• WWW.TECHNOLOGYREVIEW.COM
    AI is coming for music, too
The end of this story includes samples of AI-generated music.

Artificial intelligence was barely a term in 1956, when top scientists from the field of computing arrived at Dartmouth College for a summer conference. The computer scientist John McCarthy had coined the phrase in the funding proposal for the event, a gathering to work through how to build machines that could use language, solve problems like humans, and improve themselves. But it was a good choice, one that captured the organizers’ founding premise: Any feature of human intelligence could “in principle be so precisely described that a machine can be made to simulate it.”

In their proposal, the group had listed several “aspects of the artificial intelligence problem.” The last item on their list, and in hindsight perhaps the most difficult, was building a machine that could exhibit creativity and originality.

At the time, psychologists were grappling with how to define and measure creativity in humans. The prevailing theory—that creativity was a product of intelligence and high IQ—was fading, but psychologists weren’t sure what to replace it with. The Dartmouth organizers had one of their own. “The difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness,” they wrote, adding that such randomness “must be guided by intuition to be efficient.”

Nearly 70 years later, following a number of boom-and-bust cycles in the field, we now have AI models that more or less follow that recipe. While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains. By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. AI-generated creative works—from orchestra performances to heavy metal—are poised to suffuse our lives more thoroughly than any other product of AI has done yet. The songs are likely to blend into our streaming platforms, party and wedding playlists, soundtracks, and more, whether or not we notice who (or what) made them.

For years, diffusion models have stirred debate in the visual-art world about whether what they produce reflects true creation or mere replication. Now this debate has come for music, an art form that is deeply embedded in our experiences, memories, and social lives. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it’s becoming to define authorship and originality in the age of AI.

The courts are actively grappling with this murky territory. Major record labels are suing the top AI music generators, alleging that diffusion models do little more than replicate human art without compensation to artists. The model makers counter that their tools are made to assist in human creation.

In deciding who is right, we’re forced to think hard about our own human creativity. Is creativity, whether in artificial neural networks or biological ones, merely the result of vast statistical learning and drawn connections, with a sprinkling of randomness?
If so, then authorship is a slippery concept. If not—if there is some distinctly human element to creativity—what is it? What does it mean to be moved by something without a human creator? I had to wrestle with these questions the first time I heard an AI-generated song that was genuinely fantastic—it was unsettling to know that someone merely wrote a prompt and clicked “Generate.” That predicament is coming soon for you, too.

Making connections

After the Dartmouth conference, its participants went off in different research directions to create the foundational technologies of AI. At the same time, cognitive scientists were following a 1950 call from J.P. Guilford, president of the American Psychological Association, to tackle the question of creativity in human beings. They came to a definition, first formalized in 1953 by the psychologist Morris Stein in the Journal of Psychology: Creative works are both novel, meaning they present something new, and useful, meaning they serve some purpose to someone. Some have called for “useful” to be replaced by “satisfying,” and others have pushed for a third criterion: that creative things are also surprising.

Later, in the 1990s, the rise of functional magnetic resonance imaging made it possible to study more of the neural mechanisms underlying creativity in many fields, including music. Computational methods in the past few years have also made it easier to map out the role that memory and associative thinking play in creative decisions.

What has emerged is less a grand unified theory of how a creative idea originates and unfolds in the brain and more an ever-growing list of powerful observations. We can first divide the human creative process into phases, including an ideation or proposal step, followed by a more critical and evaluative step that looks for merit in ideas. A leading theory on what guides these two phases is called the associative theory of creativity, which posits that the most creative people can form novel connections between distant concepts.

“It could be like spreading activation,” says Roger Beaty, a researcher who leads the Cognitive Neuroscience of Creativity Laboratory at Penn State. “You think of one thing; it just kind of activates related concepts to whatever that one concept is.”

These connections often hinge specifically on semantic memory, which stores concepts and facts, as opposed to episodic memory, which stores memories from a particular time and place. Recently, more sophisticated computational models have been used to study how people make connections between concepts across great “semantic distances.” For example, the word apocalypse is more closely related to nuclear power than to celebration. Studies have shown that highly creative people may perceive very semantically distinct concepts as close together. Artists have been found to generate word associations across greater distances than non-artists. Other research has supported the idea that creative people have “leaky” attention—that is, they often notice information that might not be particularly relevant to their immediate task.

Neuroscientific methods for evaluating these processes do not suggest that creativity unfolds in a particular area of the brain. “Nothing in the brain produces creativity like a gland secretes a hormone,” Dean Keith Simonton, a leader in creativity research, wrote in the Cambridge Handbook of the Neuroscience of Creativity.
The evidence instead points to a few dispersed networks of activity during creative thought, Beaty says—one to support the initial generation of ideas through associative thinking, another involved in identifying promising ideas, and another for evaluation and modification. A new study, led by researchers at Harvard Medical School and published in February, suggests that creativity might even involve the suppression of particular brain networks, like ones involved in self-censorship.

So far, machine creativity—if you can call it that—looks quite different. Though at the time of the Dartmouth conference AI researchers were interested in machines inspired by human brains, that focus had shifted by the time diffusion models were invented, about a decade ago.

The best clue to how they work is in the name. If you dip a paintbrush loaded with red ink into a glass jar of water, the ink will diffuse and swirl into the water seemingly at random, eventually yielding a pale pink liquid. Diffusion models simulate this process in reverse, reconstructing legible forms from randomness.

For a sense of how this works for images, picture a photo of an elephant. To train the model, you make a copy of the photo, adding a layer of random black-and-white static on top. Make a second copy and add a bit more, and so on hundreds of times until the last image is pure static, with no elephant in sight. For each image in between, a statistical model predicts how much of the image is noise and how much is really the elephant. It compares its guesses with the right answers and learns from its mistakes. Over millions of these examples, the model gets better at “de-noising” the images and connecting these patterns to descriptions like “male Borneo elephant in an open field.”

Now that it’s been trained, generating a new image means reversing this process. If you give the model a prompt, like “a happy orangutan in a mossy forest,” it generates an image of random white noise and works backward, using its statistical model to remove bits of noise step by step. At first, rough shapes and colors appear. Details come after, and finally (if it works) an orangutan emerges, all without the model “knowing” what an orangutan is.

Musical images

The approach works much the same way for music. A diffusion model does not “compose” a song the way a band might, starting with piano chords and adding vocals and drums. Instead, all the elements are generated at once. The process hinges on the fact that the many complexities of a song can be depicted visually in a single waveform, representing the amplitude of a sound wave plotted against time.

Think of a record player. By traveling along a groove in a piece of vinyl, a needle mirrors the path of the sound waves engraved in the material and transmits it into a signal for the speaker. The speaker simply pushes out air in these patterns, generating sound waves that convey the whole song.

From a distance, a waveform might look as if it just follows a song’s volume. But if you were to zoom in closely enough, you could see patterns in the spikes and valleys, like the 49 waves per second for a bass guitar playing a low G. A waveform contains the summation of the frequencies of all different instruments and textures.
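To make those two ideas concrete (audio represented as a spectrogram “image,” and training data built by adding progressively more noise), here is a minimal NumPy sketch. It is purely illustrative: the sample rate, frame size, step count, and helper names are assumptions rather than anything Udio or Suno has disclosed, and a real system would fit a large neural network to predict the noise at each step rather than stop at data preparation.

```python
# Illustrative sketch only -- not Udio's or Suno's actual pipeline. It shows the two
# ideas described above: (1) a waveform becomes a spectrogram "image," and (2) the
# forward diffusion process makes progressively noisier copies of that image.
import numpy as np

def spectrogram(signal, frame=512, hop=256):
    """Magnitude spectrogram: slice the waveform into windowed frames and FFT each one."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # shape: (time, frequency)

sr = 16_000                     # assumed sample rate
t = np.arange(sr) / sr          # one second of audio
# A toy "song": the 49 Hz low G mentioned above, plus a 440 Hz tone layered on top.
audio = 0.6 * np.sin(2 * np.pi * 49 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
clean = spectrogram(audio)

# Forward (noising) process, as in the elephant-photo example: each copy gets a
# little more Gaussian static until the last one is essentially pure noise.
num_steps = 100
noisy_copies = [clean + np.random.randn(*clean.shape) * clean.std() * (s / num_steps)
                for s in range(num_steps + 1)]

# Training would fit a model to predict the noise added to each copy, conditioned on a
# text description of the clip; generation runs the chain in reverse, starting from
# pure noise and removing a little of it at every step.
```

In the clean spectrogram, the two test tones show up as bright bands at constant frequencies, which is roughly the kind of shape Ding describes next.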
“You see certain shapes start taking place,” says David Ding, cofounder of the AI music company Udio, “and that kind of corresponds to the broad melodic sense.”

Since waveforms, or similar charts called spectrograms, can be treated like images, you can create a diffusion model out of them. A model is fed millions of clips of existing songs, each labeled with a description. To generate a new song, it starts with pure random noise and works backward to create a new waveform. The path it takes to do so is shaped by what words someone puts into the prompt.

Ding worked at Google DeepMind for five years as a senior research engineer on diffusion models for images and videos, but he left to found Udio, based in New York, in 2023. The company and its competitor Suno, based in Cambridge, Massachusetts, are now leading the race for music generation models. Both aim to build AI tools that enable nonmusicians to make music. Suno is larger, claiming more than 12 million users, and raised a $125 million funding round in May 2024. The company has partnered with artists including Timbaland. Udio raised a seed funding round of $10 million in April 2024 from prominent investors like Andreessen Horowitz as well as musicians Will.i.am and Common.

The results of Udio and Suno so far suggest there’s a sizable audience of people who may not care whether the music they listen to is made by humans or machines. Suno has artist pages for creators, some with large followings, who generate songs entirely with AI, often accompanied by AI-generated images of the artist. These creators are not musicians in the conventional sense but skilled prompters, creating work that can’t be attributed to a single composer or singer. In this emerging space, our normal definitions of authorship—and our lines between creation and replication—all but dissolve.

The music industry is pushing back. Both companies were sued by major record labels in June 2024, and the lawsuits are ongoing. The labels, including Universal and Sony, allege that the AI models have been trained on copyrighted music “at an almost unimaginable scale” and generate songs that “imitate the qualities of genuine human sound recordings” (the case against Suno cites one ABBA-adjacent song called “Prancing Queen,” for example).

Suno did not respond to requests for comment on the litigation, but in a statement responding to the case posted on Suno’s blog in August, CEO Mikey Shulman said the company trains on music found on the open internet, which “indeed contains copyrighted materials.” But, he argued, “learning is not infringing.” A representative from Udio said the company would not comment on pending litigation. At the time of the lawsuit, Udio released a statement mentioning that its model has filters to ensure that it “does not reproduce copyrighted works or artists’ voices.”

Complicating matters even further is guidance from the US Copyright Office, released in January, that says AI-generated works can be copyrighted if they involve a considerable amount of human input. A month later, an artist in New York received what might be the first copyright for a piece of visual art made with the help of AI. The first song could be next.

Novelty and mimicry

These legal cases wade into a gray area similar to one explored by other court battles unfolding in AI.
At issue here is whether training AI models on copyrighted content is allowed, and whether generated songs unfairly copy a human artist’s style.

But AI music is likely to proliferate in some form regardless of these court decisions; YouTube has reportedly been in talks with major labels to license their music for AI training, and Meta’s recent expansion of its agreements with Universal Music Group suggests that licensing for AI-generated music might be on the table.

If AI music is here to stay, will any of it be any good? Consider three factors: the training data, the diffusion model itself, and the prompting. The model can only be as good as the library of music it learns from and the descriptions of that music, which must be complex to capture it well. A model’s architecture then determines how well it can use what’s been learned to generate songs. And the prompt you feed into the model—as well as the extent to which the model “understands” what you mean by “turn down that saxophone,” for example—is pivotal too.

Arguably the most important issue is the first: How extensive and diverse is the training data, and how well is it labeled? Neither Suno nor Udio has disclosed what music has gone into its training set, though these details will likely have to be disclosed during the lawsuits.

Udio says the way those songs are labeled is essential to the model. “An area of active research for us is: How do we get more and more refined descriptions of music?” Ding says. A basic description would identify the genre, but then you could also say whether a song is moody, uplifting, or calm. More technical descriptions might mention a two-five-one chord progression or a specific scale. Udio says it does this through a combination of machine and human labeling.

“Since we want to target a broad range of target users, that also means that we need a broad range of music annotators,” he says. “Not just people with music PhDs who can describe the music on a very technical level, but also music enthusiasts who have their own informal vocabulary for describing music.”

Competitive AI music generators must also learn from a constant supply of new songs made by people, or else their outputs will be stuck in time, sounding stale and dated. For this, today’s AI-generated music relies on human-generated art. In the future, though, AI music models may train on their own outputs, an approach being experimented with in other AI domains.

Because models start with a random sampling of noise, they are nondeterministic; giving the same AI model the same prompt will result in a new song each time. That’s also because many makers of diffusion models, including Udio, inject additional randomness through the process—essentially taking the waveform generated at each step and distorting it ever so slightly in hopes of adding imperfections that serve to make the output more interesting or real. The organizers of the Dartmouth conference themselves recommended such a tactic back in 1956.

According to Udio cofounder and chief operating officer Andrew Sanchez, it’s this randomness inherent in generative AI programs that comes as a shock to many people. For the past 70 years, computers have executed deterministic programs: Give the software an input and receive the same response every time.

“Many of our artists partners will be like, ‘Well, why does it do this?’” he says.
“We’re like, well, we don’t really know.” The generative era requires a new mindset, even for the companies creating it: that AI programs can be messy and inscrutable.

Is the result creation or simply replication of the training data? Fans of AI music told me we could ask the same question about human creativity. As we listen to music through our youth, neural mechanisms for learning are weighted by these inputs, and memories of these songs influence our creative outputs. In a recent study, Anthony Brandt, a composer and professor of music at Rice University, pointed out that both humans and large language models use past experiences to evaluate possible future scenarios and make better choices.

Indeed, much of human art, especially in music, is borrowed. This often results in litigation, with artists alleging that a song was copied or sampled without permission. Some artists suggest that diffusion models should be made more transparent, so we could know that a given song’s inspiration is three parts David Bowie and one part Lou Reed. Udio says there is ongoing research to achieve this, but right now, no one can do it reliably.

For great artists, “there is that combination of novelty and influence that is at play,” Sanchez says. “And I think that that’s something that is also at play in these technologies.”

But there are lots of areas where attempts to equate human neural networks with artificial ones quickly fall apart under scrutiny. Brandt carves out one domain where he sees human creativity clearly soar above its machine-made counterparts: what he calls “amplifying the anomaly.” AI models operate in the realm of statistical sampling. They do not work by emphasizing the exceptional but, rather, by reducing errors and finding probable patterns. Humans, on the other hand, are intrigued by quirks. “Rather than being treated as oddball events or ‘one-offs,’” Brandt writes, the quirk “permeates the creative product.”

He cites Beethoven’s decision to add a jarring off-key note in the last movement of his Symphony no. 8. “Beethoven could have left it at that,” Brandt says. “But rather than treating it as a one-off, Beethoven continues to reference this incongruous event in various ways. In doing so, the composer takes a momentary aberration and magnifies its impact.” One could look to similar anomalies in the backward loop sampling of late Beatles recordings, pitched-up vocals from Frank Ocean, or the incorporation of “found sounds,” like recordings of a crosswalk signal or a door closing, favored by artists like Charlie Puth and by Billie Eilish’s producer Finneas O’Connell.

If a creative output is indeed defined as one that’s both novel and useful, Brandt’s interpretation suggests that the machines may have us matched on the second criterion while humans reign supreme on the first.

To explore whether that is true, I spent a few days playing around with Udio’s model. It takes a minute or two to generate a 30-second sample, but if you have paid versions of the model you can generate whole songs. I decided to pick 12 genres, generate a song sample for each, and then find similar songs made by people. I built a quiz to see if people in our newsroom could spot which songs were made by AI.

The average score was 46%. And for a few genres, especially instrumental ones, listeners were wrong more often than not.
When I watched people do the test in front of me, I noticed that the qualities they confidently flagged as a sign of composition by AI—a fake-sounding instrument, a weird lyric—rarely proved them right. Predictably, people did worse in genres they were less familiar with; some did okay on country or soul, but many stood no chance against jazz, classical piano, or pop. Beaty, the creativity researcher, scored 66%, while Brandt, the composer, finished at 50% (though he answered correctly on the orchestral and piano sonata tests).

Remember that the model doesn’t deserve all the credit here; these outputs could not have been created without the work of human artists whose work was in the training data. But with just a few prompts, the model generated songs that few people would pick out as machine-made. A few could easily have been played at a party without raising objections, and I found two I genuinely loved, even as a lifelong musician and generally picky music person.

But sounding real is not the same thing as sounding original. The songs did not feel driven by oddities or anomalies—certainly not on the level of Beethoven’s “jump scare.” Nor did they seem to bend genres or cover great leaps between themes. In my test, people sometimes struggled to decide whether a song was AI-generated or simply bad.

How much will this matter in the end? The courts will play a role in deciding whether AI music models serve up replications or new creations—and how artists are compensated in the process—but we, as listeners, will decide their cultural value. To appreciate a song, do we need to picture a human artist behind it—someone with experience, ambitions, opinions? Is a great song no longer great if we find out it’s the product of AI?

Sanchez says people may wonder who is behind the music. But “at the end of the day, however much AI component, however much human component, it’s going to be art,” he says. “And people are going to react to it on the quality of its aesthetic merits.”

In my experiment, though, I saw that the question really mattered to people—and some vehemently resisted the idea of enjoying music made by a computer model. When one of my test subjects instinctively started bobbing her head to an electro-pop song on the quiz, her face expressed doubt. It was almost as if she was trying her best to picture a human rather than a machine as the song’s composer. “Man,” she said, “I really hope this isn’t AI.”

It was.
  • WWW.BUSINESSINSIDER.COM
    I'm a dietitian on the Mediterranean diet. Here are 12 things I buy at Trader Joe's when I don't feel like cooking.
When I tell people I've been a registered dietitian for more than 20 years, the assumption is that I love to eat nutritious foods and cook them myself. I try to mostly eat balanced and nutrient-dense meals that follow the Mediterranean diet, but I don't enjoy spending time planning, cooking, and cleaning. Thankfully, Trader Joe's has some gems that help me feed my entire family (including a picky child). Here are 12 of my must-buys to help create healthy and easy meals without spending too much time in the kitchen.

Editor's Note: Product price and availability may vary.

Envy apples seem to stay fresh longer.
Trader Joe's produce selection varies. Lauren Manaker
Envy apples can be added to salads, "girl dinners," or lunchboxes for an extra crunch, a boost of fiber, and balanced sweetness. They're appealing because their insides tend to stay whiter longer, allowing for slicing or chopping without worrying too much about being stuck with discolored fruit.

I buy Norwegian farm-raised salmon for a kick of healthy fats and protein.
Salmon is a pretty versatile protein. Lauren Manaker
Salmon from Norway is known for its pure taste, beautiful color, and firm flesh. Much of that is due to its balanced fat content and firm texture. It's also nutrient-dense, providing essentials such as omega-3s; vitamins D, B12, and A; and selenium. Plus, it's incredibly easy to cook, especially if I remember to marinate it the night before.

Clif Bars are my go-to for a boost of energy.
Trader Joe's tends to carry an array of Clif bars. Lauren Manaker
Not loving to cook also means not loving to prep snacks. But because I live an active lifestyle, I know I need to fuel myself with nutrients such as sustainable carbs before I start a workout. Clif Bars are crafted with a blend of plant-based protein, fat, and carbohydrates. They're my go-to pre-workout snack that requires zero effort in the kitchen.

The vegan kale, cashew, and basil pesto tastes good on almost everything.
Trader Joe's vegan pesto is one of my favorite buys. Lauren Manaker
I don't follow a vegan diet, but that doesn't stop me from purchasing Trader Joe's pesto to use on pasta dishes, sandwiches, or as a dip. This pesto is also my secret ingredient in grilled-cheese sandwiches.

Trader Joe's fruits-and-greens smoothie blend makes morning smoothies a breeze.
Premade blends are great for easy smoothies. Lauren Manaker
The blend of frozen produce makes my smoothie-making so easy. There's no chopping or prepping required when I'm in the mood for a breakfast smoothie — I simply toss some into a blender, add milk, and turn it on.

Vegan creamy dill dressing elevates a slew of dishes.
Trader Joe's has some great vegan dressings and sauces. Lauren Manaker
If it were socially acceptable to drink Trader Joe's vegan creamy dill dressing with a straw, I'd do it. I love that it's free from fillers or emulsifiers, and the flavor is incredibly satisfying. The obvious way to enjoy this dressing is on top of salad. However, I also use it as a saucy addition to chicken or fish meals, an ingredient in grain-based dishes, and a condiment on sandwiches.

The organic Mediterranean-style salad kit helps us eat more veggies.
Trader Joe's has an array of salad kits available. Lauren Manaker
Making a salad isn't extremely labor-intensive. However, opening a salad kit and dumping all of the contents into a bowl is so much easier than procuring and chopping ingredients and coming up with the perfect flavor combo.
Trader Joe's Mediterranean salad kit is packed with veggies and a corresponding dressing packet. I love pairing it with protein and starch for a balanced and healthy meal.

Trader Joe's prepackaged veggie mixes come in handy.
Some mixes have asparagus and mushrooms. Lauren Manaker
Veggies are a must at dinnertime in my house. Having prewashed and cut veggie and produce kits, such as the Trader Joe's asparagus sauté, makes cooking dinner a breeze. Simply open the package and sauté everything in some extra-virgin olive oil.

The bulgur pilaf with butternut squash and feta cheese is an easy side dish.
Trader Joe's frozen grains can be easy to cook. Lauren Manaker
Whole grains can be both nutritious and filling. For those who don't like spending too much time in the kitchen, cooking them can be a tedious task. Precooked frozen grains, such as Trader Joe's bulgur pilaf, help save a ton of time because they just need to be heated through. Plus, this one is made with butternut squash, and I like the boost of veggies in every bite.

Riced cauliflower stir-fry is a great base for a low-carb meal.
Trader Joe's riced cauliflower stir-fry is fairly low-carb. Lauren Manaker
Trader Joe's precooked cauliflower rice is a perfect base for a low-carb meal. I just add a protein for a complete dish. For people who don't love cauliflower rice — but tolerate it because they want to include more veggies in their diet (such as my husband) — mixing this dish with some regular rice can offer the best of both worlds.

Hard-boiled eggs are a secret shortcut in my kitchen.
I grab already cooked and peeled hard-boiled eggs when I find them. Lauren Manaker
Precooked and shelled hard-boiled eggs make for an easy breakfast protein, salad topping, or sandwich addition.

Trader Joe's Tarte au Brie et aux Tomates is my solution for pizza night.
When I can find it, I grab Trader Joe's Tarte au Brie et aux Tomates. Lauren Manaker
Yes, even dietitians want to have pizza night once in a while. Trader Joe's frozen Tarte au Brie et aux Tomates satisfies the fiercest pizza craving. Plus, heating it up takes less time than we'd spend waiting for a pie to be delivered from the local pizzeria. I enjoy one serving along with a side salad for a full meal.

This story was originally published on September 25, 2023, and most recently updated on April 16, 2025.
  • WWW.DAILYSTAR.CO.UK
    Overwatch 2 boss Aaron Keller on Stadium mode and huge new change to gameplay
We spoke with Aaron Keller about Overwatch 2's switch to third-person for its new Stadium mode, and it'd be fair to say it was not as easy as simply switching cameras.

Tech | 14:48, 16 Apr 2025

Third-person perspective is new for Overwatch (Image: Blizzard)

It feels as though 2025 is the year to forget all you know about Overwatch 2.

The game has added a return to 6v6 gameplay, while Blizzard took the decision to reveal multiple characters coming to the game earlier this year as well as in-game perks — and there are rumblings of a Netflix project.

Stadium, however, perhaps marks the biggest departure since the sequel launched, promising revised maps, a new best-of-seven format, and big changes to the core Overwatch experience like switching to a Marvel Rivals-like third-person camera.

If you thought that was easy, though, think again — ahead of Stadium's launch next week, Daily Star caught up with Game Director Aaron Keller at a roundtable interview to discuss the challenges of implementing third-person.

"People's movement abilities are amplified so just being able to track what enemies are doing is harder [in Stadium]," Keller explains.

"Pulling the camera out to a third person perspective allows us to be able to do that, and there are a lot of abilities too that might be a little bit more difficult to see in first person.

"There's a bit more lava on the ground and there are abilities like Zarya's bubbles that can do pulse damage that you can put onto teammates, and so it's a lot easier for them to recognize what's on them in third person," he adds.

While Overwatch has had some instances of third-person, this is the first time it's for longer than a single ability (Image: Blizzard)

"We had to do a lot of animation work for every hero. It wasn't just about positioning the camera. A lot of times it was positioning the hero, rotating the hero specifically by animation, and there was even a lot of custom animation for abilities and firing with some Heroes you couldn't see when they could 'Quick Melee'.

"There's even work going into letting players know when they're reloading, so we needed to do UI elements for that, and there's even a lot of VGX things that we've been doing to make things read correctly."

Thankfully, Keller feels the team's efforts have been rewarded.

"Overall I feel like we made a really great version of it [third-person Overwatch], and it's something that we're continuing to put time and energy into even for releases past Stadium's launch."
  • METRO.CO.UK
    Silent Hill meets Dead Space in Bloober’s Cronos: The New Dawn trailer
A time travel nightmare (Bloober Team)

The developers behind the acclaimed Silent Hill 2 remake have released a new trailer for their next game, and it looks suitably horrifying.

The expectations around developer Bloober Team have shifted drastically following last year's excellent Silent Hill 2, which managed to live up to the original in every way. The team is hoping to carry that momentum into its next game, which is an original title named Cronos: The New Dawn.

The sci-fi survival horror was announced in October last year, but now a gameplay trailer has arrived showcasing its Dead Space-inspired DNA.

Unlike Silent Hill, Cronos: The New Dawn looks to be more action-focused, with the footage showing a variety of weapons, including laser tripwires, heavy shotguns, and a burst rifle of some kind.

The creepy monsters are highly reminiscent of Dead Space, with lots of gangly limbs, although dismemberment isn't a key mechanic. Instead, Cronos features a 'merge' mechanic, where you have to burn the bodies of fallen monsters otherwise they can be absorbed into existing ones, making them stronger in the process.

A synopsis reads: 'In a grim world where Eastern European brutalism meets retro-futurist technology, you play as a Traveler tasked with scouring the wastelands of the future in search of time rifts that will transport you back to 1980s era Poland.

'In the past, you will witness a world in the throes of The Change, a cataclysmic event that forever altered humanity. The future, meanwhile, is a ravaged wasteland overrun with nightmarish abominations.'

Developer Bloober Team is based in Krakow, Poland, so it'll be interesting to see how the location feeds into the experience overall.

Before Silent Hill 2, Bloober Team was known for games with a more mixed reputation, such as Layers Of Fear, The Medium, and 2019's Blair Witch. If they can maintain the quality of Silent Hill 2 with Cronos though, that will be quite a turnaround for the studio.

Cronos: The New Dawn is set to be released on Xbox Series X/S, PlayStation 5, and PC in 2025, with a specific release date yet to be announced.

It's set to launch in 2025 (Bloober Team)

Email gamecentral@metro.co.uk, leave a comment below, follow us on Twitter, and sign-up to our newsletter. To submit Inbox letters and Reader's Features more easily, without the need to send an email, just use our Submit Stuff page here. For more stories like this, check our Gaming page.
  • GIZMODO.COM
    Black Mirror Director Haolu Wang on the AI Romance in ‘Hotel Reverie’
    Black Mirror dropped its seventh season last week, and while several of the new episodes share themes of love and romance, there are very few happy endings. The one entry that does offer last-act uplift is “Hotel Reverie,” which introduces a Hollywood star (Issa Rae) whose desire for meatier roles leads her into an AI world where she encounters… an entirely different sort of desire. The episode’s complex tech elements imagine that Rae’s character, Brandy Friday, is inserted into a vintage black-and-white movie—a sort of Casablanca-ish tale titled Hotel Reverie—as a way to “remake” the film with a contemporary star. Though it’s really just her consciousness that’s linked into the virtual recreation, thanks to cutting-edge AI Brandy’s experiences feel real. The world feels real. And her digitally crafted co-star, Clara (played in Hotel Reverie by tragic starlet Dorothy Chambers, and played in “Hotel Reverie” by Emma Corrin) feels extremely real. Like, “sentient with a soul and capable of falling in love” real. A new interview in the Hollywood Reporter with episode director Haolu Wang, as well as Rae and Rae’s co-star Awkwafina (who plays the head of the AI start-up behind the Hotel Reverie experiment), digs into the futuristic yet startlingly human emotional territory the episode, penned by Black Mirror creator Charlie Brooker, explores. “The story is fundamentally about two people finding a genuine connection and themselves in an entirely artificial setting,” Wang explained. “That contrast is really interesting, and it’s very moving because it talks about someone from now and someone from the past, both actors trapped in different ways who otherwise would have never met. They find each other in a similar boat and find themselves being able to be their true selves for the first time with each other, but in a limited place.” Wang continued. “What the episode discusses, broadly, around AI, is what actors are going through right now, and if this could actually happen to an actor that tried to reenact a role. What’s the psychological, and emotional implication for somebody who has just been used for two hours?” “The Brandy character in the end wants to keep a certain connection or longing, but that’s entirely limited. The more it feels real that Clara is there, the more we also know that it’s impossible. It gives people that profound feeling of, what if we use that technology on real people, on actors? What would the implication of that be?” In the end, “Hotel Reverie” comes up with a way for Brandy and Clara to keep in touch, even though Brandy is in the real world and Clara is entirely digital. There’s a little bit of “wait, how would that work exactly?” involved in the twist, but to Wang, “The connection is what matters. That’s what stays with you that will never go away, even though you’ll never quite have your person ever again. The ending is tonally bittersweet, in a way that is not totally tragic. It’s about what kind of feeling you want to leave the audience with and, for this, it’s longing.” You can watch “Hotel Reverie” and the rest of Black Mirror on Netflix. Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.
  • WWW.ARCHDAILY.COM
    Soskil House / Ludwig Godefroy architect
Soskil House / Ludwig Godefroy architect

Houses • Mérida, Mexico
Architects: Ludwig Godefroy Architecture
Area: 250 m²
Year: 2024
Photographs: Nicolas Rangel Ronquillo
Lead Architects: Ludwig Godefroy

Text description provided by the architects. Casa Soskil is a house conceived from its negative space. Let me explain. This project, instead of being designed around the built, habitable spaces—the positive space that houses the home—was conceived in reverse, starting from the void that defines its garden. This empty garden space is the fundamental element that protects the house and all its interior spaces.

The project's starting point was to control the views from the neighbors by creating floating geometric shapes around the pre-existing trees on the site. These shapes create large openings where the garden can freely grow, meanwhile blocking intrusive views. These large voids not only regulate the neighbors' presence but also create a strong sense of interiority outside, in the garden. A feeling of well-being envelops every space of the house to make people feel they live among the trees.

Casa Soskil structures the negative space of its void: the garden is no longer the leftover space that wasn't built; the other way around, the void controls the built area of the house. The project also considers another important characteristic of the site: the land naturally has two poles—one of light and one of shade. At the front, direct sunlight shines intensely, while at the back, the trees' foliage filters the light and provides shade. Two complementary atmospheres exist on the site.

Casa Soskil embraces and makes this natural contrast its own, creating at the front a sunlit social space around the swimming pool, contrasted by a shaded social space at the back. The project turns the site's inherent qualities into different architectural atmospheres, everywhere inside the house.

Casa Soskil deconstructs and fragments its interior space, creating a first social pole of light and gathering at the front, and a second social pole of shade and relaxation at the back. These two poles are connected through a walk in the garden—the green lung of the house.

The social pole of light is the active hub of the house, including a terrace/solarium and a swimming pool, while the social pole of shade is the meditative space, featuring a study, a daybed room for napping, and a fire pit. By responding to the site's natural conditions in this way, the conventional layout of living space was dismantled. The entire ground floor, its garden, and each of its trees—stretching from the entrance door to the back wall—became one large open living area. Meanwhile, the bedrooms are inserted like treehouses among the site's existing trees.

There is no longer a border between indoors and outdoors. The garden becomes the living room in its entirety. Casa Soskil reverses the traditional house with its garden, to create a garden with its house.

Published on April 16, 2025.
Cite: "Soskil House / Ludwig Godefroy architect" 16 Apr 2025. ArchDaily. <https://www.archdaily.com/1029075/casa-soskil-ludwig-godefroy-architect> ISSN 0719-8884
  • WWW.YOUTUBE.COM
    Inflating Abstract Objects in Cinema 4D⭐Tutorial + Project File
Inflating Abstract Objects in Cinema 4D ⭐ Tutorial + Project File 👉 https://cgshortcuts.com/inflating-abstract-objects-in-cinema-4d Using dynamics, we'll inflate and deflate abstract shapes in this Cinema 4D (C4D) tutorial and project file. #Cinema4D #C4D #Redshift #CGShortcuts
  • WWW.DISCOVERMAGAZINE.COM
    How Crocodiles Have Survived Over 230 Million Years and Two Mass Extinction Events
Some 215 million years ago in what is now northwestern Argentina, the terrestrial crocodylomorph Hemiprotosuchus leali prepares to devour the early mammal relative Chaliminia musteloides. (Image Credit: Jorge Gonzalez)

Crocodiles are persistent — not just in their deadly pursuit of prey, but in terms of their existence. The contemporary species hails from a 230-million-year lineage that has survived two mass extinction events.

A study in the journal Palaeontology identifies flexibility as a key to their longevity. Crocodylians that survived over millions of years can eat a variety of foods and live in multiple habitats. Understanding this level of adaptability could help threatened species survive.

"Extinction and survivorship are two sides of the same coin. Through all mass extinctions, some groups manage to persist and diversify. What can we learn by studying the deeper evolutionary patterns imparted by these events?" said Keegan Melstrom, a professor at the University of Central Oklahoma and an author of the study, who began the research as a graduate student, in a press release.

Crocodiles as Living Fossils

Crocodylians are often referred to as "living fossils." But that may be a bit of a misnomer, because that label suggests a lack of change. The study of how they survived so long runs counter to that. The creatures have prevailed for so long because they've managed to change where they live and what they eat, even as the world around them shifts.

That happened during two mass extinction events. The first was during the end-Triassic, about 201.4 million years ago. The second was at the end-Cretaceous, about 66 million years ago.

Read More: The 5 Mass Extinctions That Have Swept Our Planet

Evolution After Mass Extinction Events

During the Late Triassic Period (237 million to 201.4 million years ago), Pseudosuchia, a broad group that includes early crocodylomorphs and many other extinct lineages, dominated. The crocodylomorphs then were small-to-medium-sized creatures, relatively rare, and mostly ate small animals, likely in the water. Other pseudosuchian groups dominated the land, and came in a wide range of body shapes and sizes. But this level of specialization probably did them in during the end-Triassic extinction, leaving crocodylomorphs as one of the most dominant and adaptable groups remaining.

"After that, it goes bananas," Melstrom said in the release. "Aquatic hypercarnivores, terrestrial generalists, terrestrial hypercarnivores, terrestrial herbivores — crocodylomorphs evolved a massive number of ecological roles throughout the time of the dinosaurs."

Crocodylomorph species began a slow decline during the Late Cretaceous Period, with the more specialized species fading out. After a meteor contributed to a mass extinction event that killed off all the dinosaurs, only the aquatic and semi-aquatic crocodylomorphs remained.

Crocodile Diets Over Time

The scientists determined crocodylian diets over millions of years by analyzing the skull and tooth shapes of different species. Jaws lined with sharp, dagger-like teeth were most likely associated with carnivores, while others set with the dental equivalent of mortars and pestles likely ground plant matter into digestible food.

To glean an idea of the animals' diets over time, the researchers examined the skulls of 99 extinct crocodylomorph species and 20 living crocodylian species.
They visited zoological and paleontological museum collections across seven countries and four continents to do so. They then created a fossil database covering 230 million years. Next, they compared it to a previous dataset of living non-crocodylians, including 89 mammal and 47 lizard species. The specimens represented a range of dietary ecologies, from strict carnivores to obligate herbivores, and a wide variety of skull shapes.

Today's 26 species of living crocodylians are nearly all semiaquatic generalists. This lends some credence — at least in evolutionary terms — to the saying that it is better to be a jack of all trades than a master of one.

Before joining Discover Magazine, Paul Smaglik spent over 20 years as a science journalist, specializing in U.S. life science policy and global scientific career issues. He began his career in newspapers, but switched to scientific magazines. His work has appeared in publications including Science News, Science, Nature, and Scientific American.
  • WWW.POPSCI.COM
    Exoplanet with two ‘suns’ is even more unique than Tatooine
This is an artist's impression of the exoplanet 2M1510 (AB) b's unusual orbit around its host stars, a pair of brown dwarfs. The newly discovered planet has a polar orbit, which is perpendicular to the plane in which the two stars are travelling. Polar planets around single stars had been found before, as well as polar discs of gas and dust capable of forming planets around binary stars. But thanks to ESO's Very Large Telescope (VLT) this is the first time we have strong evidence that such a planet actually exists in a polar orbit around two stars. The two brown dwarfs appear as a single source in the sky, but astronomers know there are two of them because they periodically eclipse each other. Using the UVES spectrograph on the VLT they measured their orbital speed, and noticed that their orbits change over time. After carefully ruling out other explanations, they concluded that the gravitational tug of a planet in a polar orbit was the only way to explain the motion of the brown dwarfs. Credit: ESO / L. Calçada

The image of Luke Skywalker gazing wistfully across the desert of Tatooine while a pair of "suns" set on the horizon is among the most famous scenes in pop culture. When Star Wars debuted in 1977, such twin-sun worlds were purely fictional; astronomers did not find the first real circumbinary planet until 1993. Still, only 15 examples have been located to date—but researchers now have strong evidence suggesting another should be added to the list. What's more, it's one of the most unique binary systems ever observed. The evidence was published on April 16 in the journal Science Advances.

When you imagine a stellar system and its orbiting planets, chances are it largely resembles our own solar system. In actuality, our cosmic neighborhood is a comparatively rare sight. Of the nearly 6,000 exoplanets documented so far, more than 75 percent exist in stellar orbits radically different from our own. One of the rarest variations is a binary system, in which the path of a planet revolves around two stars. However, the exoplanet 2M1510's binary system includes a slightly different pairing: brown dwarfs.

Brown dwarfs are cosmic oddballs—while too large to classify as planets, they're also too small to truly meet the definition of a star. But at 13 to 80 times the mass of Jupiter, they exhibit more than enough gravitational pull to draw objects into orbit. 2M1510 is one such object. In this case, however, there are two brown dwarfs involved. The result, according to researchers, is an exoplanet that "eccentrically orbits" the pair.

To confirm the rarely detected space oddity, astronomers utilized radial velocity calculations to examine data previously collected by NASA's Kepler space telescope and the Transiting Exoplanet Survey Satellite (TESS), as well as the European Southern Observatory's Very Large Telescope (VLT). This combination of tools and analyses allowed astronomers to bypass a longstanding physics issue called the three-body problem. This conundrum makes it extremely difficult to assess gravitational behavior between three objects interacting in space.

The resulting evidence strongly suggests the planet orbits at a 90-degree polar angle, moving perpendicular to the dwarfs' orbits in a never-before-seen way. Adding to its uniqueness, the brown dwarfs are eclipsing, meaning that one of them is periodically partially obscured when seen from Earth. This also makes it only the second eclipsing brown dwarf binary system ever documented.
“A planet orbiting not just a binary, but a binary brown dwarf, as well as being on a polar orbit is rather incredible and exciting,” Amaury Triaud, a study co-author and professor at the University of Birmingham, said in a statement. According to an accompanying Science Advances “Focus” feature, the system also “provides strong, if indirect, evidence for the existence of one of the most exotic types of exoplanetary systems yet found.”

But as rare a find as it is, it’s possible that 2M1510 once had an even more surreal skyscape. That’s because there aren’t only two brown dwarfs in the cosmic neighborhood. A third, more distant brown dwarf is located at the system’s periphery. According to the study’s authors, this hints at a time when a trio of brown dwarfs occupied the system’s center before gravitational forces pushed one of them out of the unit. At this point, however, only time will tell if a three-star planet makes it into a Star Wars film.
  • WWW.NATURE.COM
    Within dead branches
    Nature, Published online: 16 April 2025; doi:10.1038/d41586-025-01178-wTreading familiar ground.