• TECHREPORT.COM
    Nvidia Launches Three 5060 Series GPUs. Here’s Why We Think It Could Be a Disaster in the Making
    Key Takeaways: Nvidia has announced three new 5060 series GPUs. The 8GB variants aren’t being supplied for critic reviews, raising suspicion of an underperforming product. Nvidia claims 2x faster frame rates, a figure we and other critics found misleading.
    Nvidia just announced the launch of three new graphics cards: the RTX 5060 Ti 16GB, the RTX 5060 Ti 8GB, and the RTX 5060. Both variants of the RTX 5060 Ti will be available from April 16, whereas the RTX 5060 will come out sometime in May. The RTX 5060 Ti is priced at $429 – the lowest Nvidia has ever gone for a 16GB GPU, and about 22% cheaper than the RTX 5070 was at its $549 launch price. The 8GB variant of the Ti model is priced at $379, whereas the RTX 5060 will cost $299.
    This is where things start to get murky. It’s not obvious why anyone would prefer the 8GB variant for a price difference of just $50, and the pricing may have been set deliberately to downplay the 8GB variants of the 5060 series. There’s another reason to think so. Nvidia GPU launches are usually preceded by a full-fledged review schedule, with samples handed to reviewers before the official release. That’s not the case this time around: Nvidia seems to have specifically withheld the 8GB Ti variant from reviewers, so only the 16GB variant will be reviewed by the various tech outlets. Also, although the official announcement lists April 16 as the release date for both variants, the 8GB version may come out a ‘few weeks’ after the 16GB one. It seems like Nvidia wants to shift the spotlight to the RTX 5060 Ti 16GB and ‘protect’ the 8GB variants from reviewers’ wrath.
    Why the ‘wrath’? The 8GB variants do not have enough VRAM to handle the requirements of modern games, which leads to lower texture quality, stuttering, and poor performance. You can still play games at 1080p on these GPUs, but they aren’t future-proof, which is why industry experts and gamers recommend at least 12GB of VRAM. Nvidia is aware of this. The whole ‘protect the 8GB variant’ play is happening because the company knows it has built a substandard product that reviewers would pan, hurting sales. Instead, Nvidia wants to push the 16GB variant, collect positive reviews on it, and then put the 8GB 5060 GPUs on store shelves. Buyers who don’t check the specifications may end up choosing the 8GB versions because they’re cheaper, only to find they’ve been short-changed.
    Performance
    Performance-wise, the RTX 5060 Ti 16GB GPU seems to offer the best value for gamers. Nvidia says it’s 20% faster than the RTX 4060 Ti and 30% faster than the RTX 3060 Ti. Plus, after adjusting for inflation, it offers a 15-20% lower cost per frame than the RTX 4070 and is 33% cheaper than the RTX 3060 Ti. The 8GB Ti variant offers similar core performance, but the smaller VRAM pool is a significant bottleneck that can affect performance in various modern games. The RTX 5060 is likewise claimed to be 20-25% faster than the RTX 4060 and 30% faster than the RTX 3060, with a 40% lower cost per frame. However, the VRAM drops from 12GB on the RTX 3060 to 8GB on the RTX 5060, which makes it an overpriced GPU at this price point.
    Nvidia’s Marketing Gimmick
    In addition to the review and pricing blunders of this launch, there’s also a laughable marketing gimmick going around. Nvidia claims that the new 50 series GPUs offer 2x the frame rates of the previous models. Good, right? Nope.
    Nvidia has, very smartly, used the words ‘frame rate’ instead of ‘performance.’ And even then, the numbers are way off. The official launch page says the frame rate on the 5060 Ti is 171 compared to the 4060 Ti’s 87, with latency down from 48 ms to 47 ms. However, if you look at the fine-print caveat at the bottom (something you may need a microscope for), it says this performance was achieved in DLSS Quality Mode using the maximum frame generation level supported by each GPU. As you might already know, DLSS uses AI to upscale a lower-resolution image to make it look like a higher-resolution one, while frame generation inserts AI-generated frames between rendered ones. So when Nvidia claims that Black Myth: Wukong runs at 102 FPS on the new GPU, the actual render rate is below 30 FPS – a sick marketing joke. This also isn’t the first time Nvidia has tried to deceive users: it earlier claimed the RTX 5090 would deliver twice the performance of the RTX 4090, which was never the case. That’s all we could glean from the one-page official Nvidia release, and it isn’t looking like a great launch for the ‘leading AI chip manufacturer.’ We’ll have to wait for all the GPUs to hit the stores and see how they perform in real-world settings.
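    For readers who want the frame-generation arithmetic spelled out, here is a quick back-of-the-envelope check. The 4x multi-frame-generation factor is our assumption, inferred from the “max frame gen level” caveat; Nvidia does not state the multiplier on the launch page.
```python
# Back-of-the-envelope: what native render rate is implied by the marketing figure?
claimed_fps = 102      # Wukong frame rate quoted on Nvidia's launch page
mfg_factor = 4         # assumption: multi-frame generation at its maximum setting
                       # (three AI-generated frames for every rendered frame)

rendered_fps = claimed_fps / mfg_factor
print(f"Implied native render rate: {rendered_fps:.1f} FPS")  # ~25.5 FPS, i.e. below 30
```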
  • WWW.TECHSPOT.COM
    Ubisoft Chroma helps developers simulate color blindness across all game engines
    Something to look forward to: People with color blindness have varying degrees of difficulty seeing or distinguishing certain colors. Gamers affected by color blindness often rely on specific accessibility options to fully enjoy their on-screen experience. Ubisoft's latest release, however, could significantly improve their experience – not just in one game, but across a broader range of titles. Ubisoft recently introduced Chroma, an open-source tool designed to simulate various types of color blindness. According to the French publisher, around 300 million people worldwide are affected by color vision deficiency – many of whom are gamers who spend significant time engaging with rich and vibrant digital environments. With Chroma, developers can simulate the three main types of color blindness: Protanopia, Deuteranopia, and Tritanopia. Ubisoft has already used the tool internally across several game projects, supporting its accessibility team during complex testing scenarios. Notably, Chroma is designed to work across all games, with no dependencies on specific game engines or platforms. The tool boasts additional features such as accurate visual simulation and real-time rendering at up to 60 FPS. While 60 FPS may not be considered "high-performance" by modern gaming standards, it represents a reasonable tradeoff in the context of accessibility – especially when the alternative is an inaccurate or incomplete visual experience. It's also possible that this frame rate applies only to the simulation tool during development, rather than affecting performance in final game builds. Chroma also offers live gameplay recording, screenshot capture, a configurable UI, and more. The tool works by applying a filter over the game's graphics to simulate color blindness, Ubisoft explained. Developed since 2021 by the company's Quality Control team in India, Chroma uses the Color Oracle algorithm and supports both single and dual screen setups. It provides several hotkeys and a customizable overlay to streamline testing. According to Jawad Shakil, Ubisoft's Quality Control Product Manager, Chroma was designed to integrate color-blind accessibility into the creative and testing process from the earliest stages of game development. The QC team devoted extensive effort to ensure the tool eliminated lag and minimized visual inaccuracies. Ubisoft is now releasing Chroma under an open-source license, giving other developers a new option to enhance accessibility in their own games.
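    Ubisoft hasn't published Chroma's internals beyond naming the Color Oracle algorithm, but the core idea of a color-blindness simulation filter can be sketched as a per-pixel matrix transform. The matrices below are widely circulated rough approximations (a production tool would work in linear RGB or LMS space, along the lines of the Brettel/Viénot method), so treat this as an illustration of the technique rather than Chroma's actual pipeline.
```python
import numpy as np
from PIL import Image

# Rough, commonly circulated simulation matrices -- an approximation, not the
# coefficients Ubisoft's Chroma or Color Oracle actually use.
SIM = {
    "protanopia":   np.array([[0.567, 0.433, 0.000],
                              [0.558, 0.442, 0.000],
                              [0.000, 0.242, 0.758]]),
    "deuteranopia": np.array([[0.625, 0.375, 0.000],
                              [0.700, 0.300, 0.000],
                              [0.000, 0.300, 0.700]]),
    "tritanopia":   np.array([[0.950, 0.050, 0.000],
                              [0.000, 0.433, 0.567],
                              [0.000, 0.475, 0.525]]),
}

def simulate(img: Image.Image, kind: str) -> Image.Image:
    """Apply a 3x3 color transform to every pixel to approximate a dichromat's view."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float64) / 255.0
    out = rgb @ SIM[kind].T                     # per-pixel matrix multiply
    return Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8))

# Usage (hypothetical file names):
# simulate(Image.open("frame.png"), "deuteranopia").save("frame_deuteranopia.png")
```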
  • WWW.DIGITALTRENDS.COM
    QLED markdown: Score the 65-inch Sony Bravia 7 while it has a $600 discount
    Sony makes some of the best TVs on the market in 2025, and most of the latest and greatest models (first announced at CES) haven’t even hit shelves yet! This means you’ll be able to score midrange and premium 2024 models for super-good prices, especially when there’s a sale. As luck would have it, the Sony 65-inch Bravia 7 Series 4K QLED is marked down to $1,400 from its original price of $1,900. We tested the Bravia 7 back in November 2024, and editor at large Caleb Denison gave the QLED a 4 out of 5 star rating. “The Bravia 7 has insanely great picture quality” is the major takeaway from his video review and writeup, and Sony’s thoughtful engineering deserves the credit. The Bravia 7 delivers bright and bold picture quality with rich, lifelike colors and fantastic contrast levels that rival some of the best OLED TVs out there. The Bravia 7 has a terrific local dimming system that allows the TV to achieve pure, inky blacks during dark scenes while still emphasizing highlights and other picture details. The TV gets bright enough to watch SDR content during the day without sun or other ambient light sources muddying the picture. The TV doesn’t have the best reflection handling though, so it’s best to keep lamps at least a few feet away from the screen. Apps, casting, and voice assistant features run on Google TV OS, a fast and intuitive smart hub that’s packed with apps, free live TV stations, and even smart home controls (for compatible devices). The TV also supports HDMI 2.1 connectivity, VRR, and ALLM, making it an excellent choice for gaming! Save $500 on the Sony 65-inch Bravia 7 Series 4K QLED when you purchase today. We also recommend taking a look at our lists of the best Sony TV deals, best QLED TV deals, and best TV deals for even more discounts on top Sony sets.
  • WWW.WSJ.COM
    ‘Abundance’ Review: Supply-Side Liberalism
    Two progressive intellectuals explain where the American left went wrong—but without a hint of self-criticism or admission of past errors.
  • ARSTECHNICA.COM
    LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions
    Scharon Harding – Apr 16, 2025
    LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers' personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them. The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a Software as a Service marketing platform that can drive advertiser sales “with AI powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday. The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse's tech to “expand new software development and go-to-market products," it said. LG didn’t specify the duration of its licensing deal with Zenapse. Zenapse’s platform for connected TVs (CTVs), ZenVision, is supposed to be able to interpret the types of emotions shown in the content someone is watching on TV, partially by using publicly available information about the show's or movie’s script and plot, StreamTV Insider reported. ZenVision also analyzes viewer behavior, grouping viewers based on their consumption patterns, the publication noted. Under the new partnership, ZenVision can use data that LG has gathered from the automatic content recognition software in LG TVs. With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or "emotionally engaged planners," an LG spokesperson told StreamTV Insider. Zenapse's website for ZenVision points to other potential market segments, including "digital adopters," "wellness seekers," "positive impact & environment," and "money matters." Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it. This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data. “As viewers engage with content, ZenVision's understanding of a consumer grows deeper, and our... 
    segmentation continually evolves to optimize predictions,” the ZenVision website says.
    Getting emotional
    LG’s partnership comes as advertisers struggle to appeal to TV viewers’ emotions. Google, for example, attempted to tug at parents’ heartstrings with the now-infamous Dear Sydney ad aired during the 2024 Summer Olympics. Looking to push Gemini, Google hit all the wrong chords with parents, and, after much backlash, plucked the ad. The partnership also comes as TV OS operators seek new ways to use smart TVs to grow their own advertising businesses and to get people to use TVs to buy stuff. With their ability to track TV viewers' behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG's announcement pointed out, CTVs represent "one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023." But as advertisers' interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy. LG is one of the biggest global TV brands, so its plan to distribute emotionally driven ads to the 200 million LG TVs currently in people's homes could have a ripple effect. Further illustrating LG TVs' dominance, webOS is estimated to be in 35 percent of US homes, per data that Hub Entertainment Research shared this week. As such, LG's foray into advertising driven by AI’s ability to understand and appeal to viewer emotions could lead to other CTV OSes following suit. For its part, LG thinks it can use Zenapse's tech to make "future innovations that could shape new emotionally intelligent experiences for the TV screen," a spokesperson told StreamTV Insider. As it stands, targeted ads are a divisive approach to what we might consider a necessary evil: advertising. While targeted ads rely on tracking techniques that many find invasive, they could also result in ads that are more relevant and less annoying to the people seeing them. In cases where advertising is inevitable, some prefer ads that appeal on a personal level over messaging that can be inappropriate or, even, disturbing and offensive. At this stage, we don’t know how the ads shown on LG’s webOS might evolve with Zenapse’s technology. But it seems like LG and, likely, other smart TV OS operators will try to strengthen their abilities to understand your convictions, beliefs, and values.
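    To demystify the mechanics a little: behaviour-based segmentation of the kind described above is, at bottom, ordinary clustering. The sketch below is purely hypothetical — made-up viewing features and arbitrary labels borrowed from the article — and has no connection to ZenVision's actual models, which Zenapse has not published.
```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical viewer features: each row is a viewer, each column a toy signal
# that could plausibly be derived from viewing behaviour (e.g. weekly hours of
# news, sport, reality TV, and shopping-channel content). Values are random here.
rng = np.random.default_rng(0)
viewers = rng.random((500, 4))

# Cluster viewers into four groups based purely on those behaviour features.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(viewers)

# Marketing labels from the article, assigned to clusters arbitrarily for illustration.
segment_names = ["goal-driven achievers", "social connectors",
                 "emotionally engaged planners", "wellness seekers"]
for name, count in zip(segment_names, np.bincount(kmeans.labels_, minlength=4)):
    print(f"{name}: {count} viewers")
```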
  • WWW.NEWSCIENTIST.COM
    Lab-grown chicken could be made chewier using artificial capillaries
    A machine delivers a nutrient-rich liquid to artificial chicken fibres. Shoji Takeuchi/The University of Tokyo
    A thick, bite-sized piece of chicken fillet has been grown in a lab using tiny tubes to mimic the capillaries found in real muscle. Researchers say this gives the product a chewier texture. When growing thick pieces of cultured meat, one major problem is that cells in the centre don’t get enough oxygen or nutrients, so they die and break down, says Shoji Takeuchi at the University of Tokyo. “This leads to necrosis and makes it hard to grow meat with good texture and taste,” he says. “Our goal was to solve this by creating a way to feed cells evenly throughout the tissue, just like blood vessels do in the body. We thought, ‘What if we could create artificial capillaries using hollow fibres?’” The fibres used by Takeuchi and his colleagues were inspired by similar hollow tubes used in the medical industry, such as for kidney dialysis. To create the lab-grown meat, the team essentially wanted to create an artificial circulatory system. “Dialysis fibres are used to filter waste from blood,” says Takeuchi. “Our fibres are designed to feed living cells.” First, the researchers 3D-printed a small frame to hold and grow the cultured meat, attaching more than 1000 hollow fibres using a robotic tool. They then embedded this array into a gel containing living cells. “We created a ‘meat-growing device’ using our hollow-fibre array,” says Takeuchi. “We put living chicken cells and collagen gel around the fibres. Then we flowed nutrient-rich liquid inside the hollow fibres, just like blood flows through capillaries. Over several days, the cells grew and aligned into muscle tissue, forming a thick, steak-like structure.” The resulting cultured chicken meat weighed 11 grams and was 2 centimetres thick. The tissue had muscle fibres aligned in one direction, which improves texture, says Takeuchi. “We also found that the centre of the meat stayed alive and healthy, unlike past methods, where the middle would die.” While the meat wasn’t considered suitable for a human taste test, a machine analysis showed it had good chewiness and flavour markers, says Takeuchi. Manipulating the hollow fibres may also make it possible to simulate different cuts of meat, he says. “By changing the fibre spacing, orientation or flow patterns, we may be able to mimic different textures, like more tender or more chewy meat.” Johannes le Coutre at the University of New South Wales in Sydney says that while it is impressive research, the process would be difficult to carry out on an industrial scale. “[The] holy grail in this whole field is scaling up of new technology,” he says. Journal reference: Trends in Biotechnology, DOI: 10.1016/j.tibtech.2025.02.022
  • WWW.TECHNOLOGYREVIEW.COM
    AI is coming for music, too
    The end of this story includes samples of AI-generated music. Artificial intelligence was barely a term in 1956, when top scientists from the field of computing arrived at Dartmouth College for a summer conference. The computer scientist John McCarthy had coined the phrase in the funding proposal for the event, a gathering to work through how to build machines that could use language, solve problems like humans, and improve themselves. But it was a good choice, one that captured the organizers’ founding premise: Any feature of human intelligence could “in principle be so precisely described that a machine can be made to simulate it.”  In their proposal, the group had listed several “aspects of the artificial intelligence problem.” The last item on their list, and in hindsight perhaps the most difficult, was building a machine that could exhibit creativity and originality. At the time, psychologists were grappling with how to define and measure creativity in humans. The prevailing theory—that creativity was a product of intelligence and high IQ—was fading, but psychologists weren’t sure what to replace it with. The Dartmouth organizers had one of their own. “The difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness,” they wrote, adding that such randomness “must be guided by intuition to be efficient.”  Nearly 70 years later, following a number of boom-and-bust cycles in the field, we now have AI models that more or less follow that recipe. While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains. By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.  Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. AI-generated creative works—from orchestra performances to heavy metal—are poised to suffuse our lives more thoroughly than any other product of AI has done yet. The songs are likely to blend into our streaming platforms, party and wedding playlists, soundtracks, and more, whether or not we notice who (or what) made them.  For years, diffusion models have stirred debate in the visual-art world about whether what they produce reflects true creation or mere replication. Now this debate has come for music, an art form that is deeply embedded in our experiences, memories, and social lives. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it’s becoming to define authorship and originality in the age of AI.  The courts are actively grappling with this murky territory. Major record labels are suing the top AI music generators, alleging that diffusion models do little more than replicate human art without compensation to artists. The model makers counter that their tools are made to assist in human creation.   In deciding who is right, we’re forced to think hard about our own human creativity. Is creativity, whether in artificial neural networks or biological ones, merely the result of vast statistical learning and drawn connections, with a sprinkling of randomness? 
If so, then authorship is a slippery concept. If not—if there is some distinctly human element to creativity—what is it? What does it mean to be moved by something without a human creator? I had to wrestle with these questions the first time I heard an AI-generated song that was genuinely fantastic—it was unsettling to know that someone merely wrote a prompt and clicked “Generate.” That predicament is coming soon for you, too.
Making connections
After the Dartmouth conference, its participants went off in different research directions to create the foundational technologies of AI. At the same time, cognitive scientists were following a 1950 call from J.P. Guilford, president of the American Psychological Association, to tackle the question of creativity in human beings. They came to a definition, first formalized in 1953 by the psychologist Morris Stein in the Journal of Psychology: Creative works are both novel, meaning they present something new, and useful, meaning they serve some purpose to someone. Some have called for “useful” to be replaced by “satisfying,” and others have pushed for a third criterion: that creative things are also surprising.  Later, in the 1990s, the rise of functional magnetic resonance imaging made it possible to study more of the neural mechanisms underlying creativity in many fields, including music. Computational methods in the past few years have also made it easier to map out the role that memory and associative thinking play in creative decisions.  What has emerged is less a grand unified theory of how a creative idea originates and unfolds in the brain and more an ever-growing list of powerful observations. We can first divide the human creative process into phases, including an ideation or proposal step, followed by a more critical and evaluative step that looks for merit in ideas. A leading theory on what guides these two phases is called the associative theory of creativity, which posits that the most creative people can form novel connections between distant concepts. “It could be like spreading activation,” says Roger Beaty, a researcher who leads the Cognitive Neuroscience of Creativity Laboratory at Penn State. “You think of one thing; it just kind of activates related concepts to whatever that one concept is.” These connections often hinge specifically on semantic memory, which stores concepts and facts, as opposed to episodic memory, which stores memories from a particular time and place. Recently, more sophisticated computational models have been used to study how people make connections between concepts across great “semantic distances.” For example, the word apocalypse is more closely related to nuclear power than to celebration. Studies have shown that highly creative people may perceive very semantically distinct concepts as close together. Artists have been found to generate word associations across greater distances than non-artists. Other research has supported the idea that creative people have “leaky” attention—that is, they often notice information that might not be particularly relevant to their immediate task.  Neuroscientific methods for evaluating these processes do not suggest that creativity unfolds in a particular area of the brain. “Nothing in the brain produces creativity like a gland secretes a hormone,” Dean Keith Simonton, a leader in creativity research, wrote in the Cambridge Handbook of the Neuroscience of Creativity.  
The evidence instead points to a few dispersed networks of activity during creative thought, Beaty says—one to support the initial generation of ideas through associative thinking, another involved in identifying promising ideas, and another for evaluation and modification. A new study, led by researchers at Harvard Medical School and published in February, suggests that creativity might even involve the suppression of particular brain networks, like ones involved in self-censorship.  So far, machine creativity—if you can call it that—looks quite different. Though at the time of the Dartmouth conference AI researchers were interested in machines inspired by human brains, that focus had shifted by the time diffusion models were invented, about a decade ago.  The best clue to how they work is in the name. If you dip a paintbrush loaded with red ink into a glass jar of water, the ink will diffuse and swirl into the water seemingly at random, eventually yielding a pale pink liquid. Diffusion models simulate this process in reverse, reconstructing legible forms from randomness. For a sense of how this works for images, picture a photo of an elephant. To train the model, you make a copy of the photo, adding a layer of random black-and-white static on top. Make a second copy and add a bit more, and so on hundreds of times until the last image is pure static, with no elephant in sight. For each image in between, a statistical model predicts how much of the image is noise and how much is really the elephant. It compares its guesses with the right answers and learns from its mistakes. Over millions of these examples, the model gets better at “de-noising” the images and connecting these patterns to descriptions like “male Borneo elephant in an open field.”  Now that it’s been trained, generating a new image means reversing this process. If you give the model a prompt, like “a happy orangutan in a mossy forest,” it generates an image of random white noise and works backward, using its statistical model to remove bits of noise step by step. At first, rough shapes and colors appear. Details come after, and finally (if it works) an orangutan emerges, all without the model “knowing” what an orangutan is.
Musical images
The approach works much the same way for music. A diffusion model does not “compose” a song the way a band might, starting with piano chords and adding vocals and drums. Instead, all the elements are generated at once. The process hinges on the fact that the many complexities of a song can be depicted visually in a single waveform, representing the amplitude of a sound wave plotted against time.  Think of a record player. By traveling along a groove in a piece of vinyl, a needle mirrors the path of the sound waves engraved in the material and transmits it into a signal for the speaker. The speaker simply pushes out air in these patterns, generating sound waves that convey the whole song.  From a distance, a waveform might look as if it just follows a song’s volume. But if you were to zoom in closely enough, you could see patterns in the spikes and valleys, like the 49 waves per second for a bass guitar playing a low G. A waveform contains the summation of the frequencies of all different instruments and textures. 
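For the technically curious, here is a minimal sketch of that de-noising recipe (a DDPM-style forward and reverse process) applied to a 1-D array standing in for a waveform. The denoiser is a stub — a real system would be a large neural network conditioned on a text prompt — so this shows the mechanics only, not how Udio or Suno actually implement them.
```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t):
    """q(x_t | x_0): blend the clean signal with Gaussian noise at step t (training side)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps                       # the model is trained to predict eps from xt

def predict_noise(xt, t):
    """Stub for the trained denoiser; a real model predicts the noise from (x_t, t, prompt)."""
    return np.zeros_like(xt)             # placeholder only

def sample(length=22050):
    """Reverse process: start from pure noise and de-noise step by step."""
    x = rng.standard_normal(length)
    for t in reversed(range(T)):
        eps_hat = predict_noise(x, t)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(length) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise   # the extra per-step randomness mentioned later
    return x

waveform = sample()   # with a trained denoiser, this array would be the generated audio
```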
“You see certain shapes start taking place,” says David Ding, cofounder of the AI music company Udio, “and that kind of corresponds to the broad melodic sense.”  Since waveforms, or similar charts called spectrograms, can be treated like images, you can create a diffusion model out of them. A model is fed millions of clips of existing songs, each labeled with a description. To generate a new song, it starts with pure random noise and works backward to create a new waveform. The path it takes to do so is shaped by what words someone puts into the prompt. Ding worked at Google DeepMind for five years as a senior research engineer on diffusion models for images and videos, but he left to found Udio, based in New York, in 2023. The company and its competitor Suno, based in Cambridge, Massachusetts, are now leading the race for music generation models. Both aim to build AI tools that enable nonmusicians to make music. Suno is larger, claiming more than 12 million users, and raised a $125 million funding round in May 2024. The company has partnered with artists including Timbaland. Udio raised a seed funding round of $10 million in April 2024 from prominent investors like Andreessen Horowitz as well as musicians Will.i.am and Common. The results of Udio and Suno so far suggest there’s a sizable audience of people who may not care whether the music they listen to is made by humans or machines. Suno has artist pages for creators, some with large followings, who generate songs entirely with AI, often accompanied by AI-generated images of the artist. These creators are not musicians in the conventional sense but skilled prompters, creating work that can’t be attributed to a single composer or singer. In this emerging space, our normal definitions of authorship—and our lines between creation and replication—all but dissolve. The music industry is pushing back. Both companies were sued by major record labels in June 2024, and the lawsuits are ongoing. The labels, including Universal and Sony, allege that the AI models have been trained on copyrighted music “at an almost unimaginable scale” and generate songs that “imitate the qualities of genuine human sound recordings” (the case against Suno cites one ABBA-adjacent song called “Prancing Queen,” for example).  Suno did not respond to requests for comment on the litigation, but in a statement responding to the case posted on Suno’s blog in August, CEO Mikey Shulman said the company trains on music found on the open internet, which “indeed contains copyrighted materials.” But, he argued, “learning is not infringing.” A representative from Udio said the company would not comment on pending litigation. At the time of the lawsuit, Udio released a statement mentioning that its model has filters to ensure that it “does not reproduce copyrighted works or artists’ voices.”  Complicating matters even further is guidance from the US Copyright Office, released in January, that says AI-generated works can be copyrighted if they involve a considerable amount of human input. A month later, an artist in New York received what might be the first copyright for a piece of visual art made with the help of AI. The first song could be next.
Novelty and mimicry
These legal cases wade into a gray area similar to one explored by other court battles unfolding in AI. 
At issue here is whether training AI models on copyrighted content is allowed, and whether generated songs unfairly copy a human artist’s style.  But AI music is likely to proliferate in some form regardless of these court decisions; YouTube has reportedly been in talks with major labels to license their music for AI training, and Meta’s recent expansion of its agreements with Universal Music Group suggests that licensing for AI-generated music might be on the table.  If AI music is here to stay, will any of it be any good? Consider three factors: the training data, the diffusion model itself, and the prompting. The model can only be as good as the library of music it learns from and the descriptions of that music, which must be complex to capture it well. A model’s architecture then determines how well it can use what’s been learned to generate songs. And the prompt you feed into the model—as well as the extent to which the model “understands” what you mean by “turn down that saxophone,” for example—is pivotal too. Arguably the most important issue is the first: How extensive and diverse is the training data, and how well is it labeled? Neither Suno nor Udio has disclosed what music has gone into its training set, though these details will likely have to be disclosed during the lawsuits.  Udio says the way those songs are labeled is essential to the model. “An area of active research for us is: How do we get more and more refined descriptions of music?” Ding says. A basic description would identify the genre, but then you could also say whether a song is moody, uplifting, or calm. More technical descriptions might mention a two-five-one chord progression or a specific scale. Udio says it does this through a combination of machine and human labeling.  “Since we want to target a broad range of target users, that also means that we need a broad range of music annotators,” he says. “Not just people with music PhDs who can describe the music on a very technical level, but also music enthusiasts who have their own informal vocabulary for describing music.” Competitive AI music generators must also learn from a constant supply of new songs made by people, or else their outputs will be stuck in time, sounding stale and dated. For this, today’s AI-generated music relies on human-generated art. In the future, though, AI music models may train on their own outputs, an approach being experimented with in other AI domains. Because models start with a random sampling of noise, they are nondeterministic; giving the same AI model the same prompt will result in a new song each time. That’s also because many makers of diffusion models, including Udio, inject additional randomness through the process—essentially taking the waveform generated at each step and distorting it ever so slightly in hopes of adding imperfections that serve to make the output more interesting or real. The organizers of the Dartmouth conference themselves recommended such a tactic back in 1956. According to Udio cofounder and chief operating officer Andrew Sanchez, it’s this randomness inherent in generative AI programs that comes as a shock to many people. For the past 70 years, computers have executed deterministic programs: Give the software an input and receive the same response every time.  “Many of our artists partners will be like, ‘Well, why does it do this?’” he says. 
“We’re like, well, we don’t really know.” The generative era requires a new mindset, even for the companies creating it: that AI programs can be messy and inscrutable. Is the result creation or simply replication of the training data? Fans of AI music told me we could ask the same question about human creativity. As we listen to music through our youth, neural mechanisms for learning are weighted by these inputs, and memories of these songs influence our creative outputs. In a recent study, Anthony Brandt, a composer and professor of music at Rice University, pointed out that both humans and large language models use past experiences to evaluate possible future scenarios and make better choices.  Indeed, much of human art, especially in music, is borrowed. This often results in litigation, with artists alleging that a song was copied or sampled without permission. Some artists suggest that diffusion models should be made more transparent, so we could know that a given song’s inspiration is three parts David Bowie and one part Lou Reed. Udio says there is ongoing research to achieve this, but right now, no one can do it reliably.  For great artists, “there is that combination of novelty and influence that is at play,” Sanchez says. “And I think that that’s something that is also at play in these technologies.” But there are lots of areas where attempts to equate human neural networks with artificial ones quickly fall apart under scrutiny. Brandt carves out one domain where he sees human creativity clearly soar above its machine-made counterparts: what he calls “amplifying the anomaly.” AI models operate in the realm of statistical sampling. They do not work by emphasizing the exceptional but, rather, by reducing errors and finding probable patterns. Humans, on the other hand, are intrigued by quirks. “Rather than being treated as oddball events or ‘one-offs,’” Brandt writes, the quirk “permeates the creative product.”  He cites Beethoven’s decision to add a jarring off-key note in the last movement of his Symphony no. 8. “Beethoven could have left it at that,” Brandt says. “But rather than treating it as a one-off, Beethoven continues to reference this incongruous event in various ways. In doing so, the composer takes a momentary aberration and magnifies its impact.” One could look to similar anomalies in the backward loop sampling of late Beatles recordings, pitched-up vocals from Frank Ocean, or the incorporation of “found sounds,” like recordings of a crosswalk signal or a door closing, favored by artists like Charlie Puth and by Billie Eilish’s producer Finneas O’Connell.  If a creative output is indeed defined as one that’s both novel and useful, Brandt’s interpretation suggests that the machines may have us matched on the second criterion while humans reign supreme on the first.  To explore whether that is true, I spent a few days playing around with Udio’s model. It takes a minute or two to generate a 30-second sample, but if you have paid versions of the model you can generate whole songs. I decided to pick 12 genres, generate a song sample for each, and then find similar songs made by people. I built a quiz to see if people in our newsroom could spot which songs were made by AI.  The average score was 46%. And for a few genres, especially instrumental ones, listeners were wrong more often than not. 
When I watched people do the test in front of me, I noticed that the qualities they confidently flagged as a sign of composition by AI—a fake-sounding instrument, a weird lyric—rarely proved them right. Predictably, people did worse in genres they were less familiar with; some did okay on country or soul, but many stood no chance against jazz, classical piano, or pop. Beaty, the creativity researcher, scored 66%, while Brandt, the composer, finished at 50% (though he answered correctly on the orchestral and piano sonata tests).  Remember that the model doesn’t deserve all the credit here; these outputs could not have been created without the work of human artists whose work was in the training data. But with just a few prompts, the model generated songs that few people would pick out as machine-made. A few could easily have been played at a party without raising objections, and I found two I genuinely loved, even as a lifelong musician and generally picky music person. But sounding real is not the same thing as sounding original. The songs did not feel driven by oddities or anomalies—certainly not on the level of Beethoven’s “jump scare.” Nor did they seem to bend genres or cover great leaps between themes. In my test, people sometimes struggled to decide whether a song was AI-generated or simply bad.  How much will this matter in the end? The courts will play a role in deciding whether AI music models serve up replications or new creations—and how artists are compensated in the process—but we, as listeners, will decide their cultural value. To appreciate a song, do we need to picture a human artist behind it—someone with experience, ambitions, opinions? Is a great song no longer great if we find out it’s the product of AI?  Sanchez says people may wonder who is behind the music. But “at the end of the day, however much AI component, however much human component, it’s going to be art,” he says. “And people are going to react to it on the quality of its aesthetic merits.” In my experiment, though, I saw that the question really mattered to people—and some vehemently resisted the idea of enjoying music made by a computer model. When one of my test subjects instinctively started bobbing her head to an electro-pop song on the quiz, her face expressed doubt. It was almost as if she was trying her best to picture a human rather than a machine as the song’s composer. “Man,” she said, “I really hope this isn’t AI.”  It was. 
  • WWW.BUSINESSINSIDER.COM
    I'm a dietitian on the Mediterranean diet. Here are 12 things I buy at Trader Joe's when I don't feel like cooking.
    When I tell people I've been a registered dietitian for more than 20 years, the assumption is that I love to eat nutritious foods and cook them myself. I try to mostly eat balanced and nutrient-dense meals that follow the Mediterranean diet, but I don't enjoy spending time planning, cooking, and cleaning. Thankfully, Trader Joe's has some gems that help me feed my entire family (including a picky child). Here are 12 of my must-buys to help create healthy and easy meals without spending too much time in the kitchen.
    Envy apples seem to stay fresh longer. Envy apples can be added to salads, "girl dinners," or lunchboxes for an extra crunch, a boost of fiber, and balanced sweetness. They're appealing because their insides tend to stay whiter longer, allowing for slicing or chopping without worrying too much about being stuck with discolored fruit.
    I buy Norwegian farm-raised salmon for a kick of healthy fats and protein. Salmon from Norway is known for its pure taste, beautiful color, and firm flesh. Much of that is due to its balanced fat content and firm texture. It's also nutrient-dense, providing essentials such as omega-3; vitamins D, B12, and A; and selenium. Plus, it's incredibly easy to cook, especially if I remember to marinate it the night before.
    Clif Bars are my go-to for a boost of energy. Not loving to cook also means not loving to prep snacks. But because I live an active lifestyle, I know I need to fuel myself with nutrients such as sustainable carbs before I start a workout. Clif Bars are crafted with a blend of plant-based protein, fat, and carbohydrates. They're my go-to pre-workout snack that requires zero effort in the kitchen.
    The vegan kale, cashew, and basil pesto tastes good on almost everything. I don't follow a vegan diet, but that doesn't stop me from purchasing Trader Joe's pesto to use on pasta dishes, sandwiches, or as a dip. This pesto is also my secret ingredient in grilled-cheese sandwiches.
    Trader Joe's fruits-and-greens smoothie blend makes morning smoothies a breeze. The blend of frozen produce makes my smoothie-making so easy. There's no chopping or prepping required when I'm in the mood for a breakfast smoothie — I simply toss some into a blender, add milk, and turn it on.
    Vegan creamy dill dressing elevates a slew of dishes. If it were socially acceptable to drink Trader Joe's vegan creamy dill dressing with a straw, I'd do it. I love that it's free from fillers or emulsifiers, and the flavor is incredibly satisfying. The obvious way to enjoy this dressing is on top of salad. However, I also use it as a saucy addition to chicken or fish meals, an ingredient in grain-based dishes, and a condiment on sandwiches.
    The organic Mediterranean-style salad kit helps us eat more veggies. Making a salad isn't extremely labor-intensive. However, opening a salad kit and dumping all of the contents into a bowl is so much easier than procuring and chopping ingredients and coming up with the perfect flavor combo. 
Trader Joe's Mediterranean salad kit is packed with veggies and a corresponding dressing packet. I love pairing it with protein and starch for a balanced and healthy meal. Trader Joe's prepackaged veggie mixes come in handy. Some mixes have asparagus and mushrooms. Lauren Manaker Veggies are a must at dinnertime in my house. Having prewashed and cut veggie and produce kits, such as the Trader Joe's asparagus sauté, makes cooking dinner a breeze.Simply open the package and sauté everything in some extra-virgin olive oil.  The bulgur pilaf with butternut squash and feta cheese is an easy side dish. Trader Joe's frozen grains can be easy to cook. Lauren Manaker Whole grains can be both nutritious and filling. For those who don't like spending too much time in the kitchen, cooking them can be a tedious task.Precooked frozen grains, such as Trader Joe's bulgur pilaf, help save a ton of time because they just need to be heated through. Plus, this one is made with butternut squash, and I like the boost of veggies in every bite. Riced cauliflower stir-fry is a great base for a low-carb meal. Trader Joe's riced cauliflower stir-fry is fairly low-carb. Lauren Manaker Trader Joe's precooked cauliflower rice is a perfect base for a low-carb meal. I just add a protein for a complete dish. For people who don't love cauliflower rice — but tolerate it because they want to include more veggies in their diet (such as my husband) — mixing this dish with some regular rice can offer the best of both worlds. Hard-boiled eggs are a secret shortcut in my kitchen. I grab already cooked and peeled hard-boiled eggs when I find them. Lauren Manaker Precooked and shelled hard-boiled eggs make for an easy breakfast protein, salad topping, or sandwich addition. Trader Joe's Tarte au Brie et aux Tomates is my solution for pizza night. When I can find it, I grab Trader Joe's Tarte au Brie et aux Tomates. Lauren Manaker Yes, even dietitians want to have pizza night once in a while.Trader Joe's frozen Tarte au Brie et aux Tomates satisfies the fiercest pizza craving. Plus, heating it up takes less time than we'd spend waiting for a pie to be delivered from the local pizzeria.I enjoy one serving along with a side salad for a full meal. Click to keep reading Trader Joe's diaries like this one.This story was originally published on September 25, 2023, and most recently updated on April 16, 2025.
  • WWW.DAILYSTAR.CO.UK
    Overwatch 2 boss Aaron Keller on Stadium mode and huge new change to gameplay
    We spoke with Aaron Keller about Overwatch 2's switch to third-person for its new Stadium mode, and it'd be fair to say it was not as easy as simply switching cameras.
    It feels as though 2025 is the year to forget all you know about Overwatch 2. The game has added a return to 6v6 gameplay, while Blizzard took the decision to reveal multiple characters coming to the game earlier this year as well as in-game perks — and there are rumblings of a Netflix project. Stadium, however, perhaps marks the biggest departure since the sequel launched, promising revised maps, a new best-of-seven format, and big changes to the core Overwatch experience, like switching to a Marvel Rivals-like third-person camera. If you thought that was easy, though, think again — ahead of Stadium's launch next week, Daily Star caught up with Game Director Aaron Keller at a roundtable interview to discuss the challenges of implementing third-person.
    "People's movement abilities are amplified, so just being able to track what enemies are doing is harder [in Stadium]," Keller explains. "Pulling the camera out to a third-person perspective allows us to be able to do that, and there are a lot of abilities too that might be a little bit more difficult to see in first person." "There's a bit more lava on the ground, and there are abilities like Zarya's bubbles that can do pulse damage that you can put onto teammates, and so it's a lot easier for them to recognize what's on them in third person," he adds.
    While Overwatch has had some instances of third-person, this is the first time it's for longer than a single ability. "We had to do a lot of animation work for every hero. It wasn't just about positioning the camera. A lot of times it was positioning the hero, rotating the hero specifically by animation, and there was even a lot of custom animation for abilities and firing — with some heroes you couldn't see when they could 'Quick Melee'. There's even work going into letting players know when they're reloading, so we needed to do UI elements for that, and there's even a lot of VFX things that we've been doing to make things read correctly."
    Thankfully, Keller feels the team's efforts have been rewarded. "Overall I feel like we made a really great version of it [third-person Overwatch], and it's something that we're continuing to put time and energy into even for releases past Stadium's launch."
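    Blizzard's code isn't public, so purely as an illustration of why "pulling the camera out" is more than flipping a setting, here is a generic over-the-shoulder camera boom sketch — hypothetical names, a Z-up convention, and none of the per-hero animation, collision sweeps, or UI work Keller describes.
```python
import math

def third_person_camera(hero_pos, yaw_deg, pitch_deg,
                        boom_length=3.0, shoulder=0.7, eye_height=1.6):
    """Place an over-the-shoulder camera behind a hero (Z-up, arbitrary units)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # Aim direction derived from the player's look angles
    fwd = (math.cos(pitch) * math.cos(yaw),
           math.cos(pitch) * math.sin(yaw),
           math.sin(pitch))
    # Horizontal side axis used for the shoulder offset (sign picks the shoulder)
    side = (math.sin(yaw), -math.cos(yaw), 0.0)
    pivot = (hero_pos[0], hero_pos[1], hero_pos[2] + eye_height)
    cam = tuple(pivot[i] - boom_length * fwd[i] + shoulder * side[i] for i in range(3))
    return cam, fwd   # a real engine would also sweep the boom against walls here

print(third_person_camera((0.0, 0.0, 0.0), yaw_deg=90.0, pitch_deg=-10.0))
```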
  • METRO.CO.UK
    Silent Hill meets Dead Space in Bloober’s Cronos: The New Dawn trailer
    A time travel nightmare (Bloober Team)
    The developers behind the acclaimed Silent Hill 2 remake have released a new trailer for their next game, and it looks suitably horrifying. The expectations around developer Bloober Team have shifted drastically following last year's excellent Silent Hill 2, which managed to live up to the original in every way. The team is hoping to carry that momentum into its next game, which is an original title named Cronos: The New Dawn. The sci-fi survival horror was announced in October last year, but now a gameplay trailer has arrived showcasing its Dead Space-inspired DNA. Unlike Silent Hill, Cronos: The New Dawn looks to be more action-focused, with the footage showing a variety of weapons, including laser tripwires, heavy shotguns, and a burst rifle of some kind. The creepy monsters are highly reminiscent of Dead Space, with lots of gangly limbs, although dismemberment isn't a key mechanic. Instead, Cronos features a 'merge' mechanic, where you have to burn the bodies of fallen monsters, otherwise they can be absorbed into existing ones, making them stronger in the process. A synopsis reads: 'In a grim world where Eastern European brutalism meets retro-futurist technology, you play as a Traveler tasked with scouring the wastelands of the future in search of time rifts that will transport you back to 1980s era Poland. In the past, you will witness a world in the throes of The Change, a cataclysmic event that forever altered humanity. The future, meanwhile, is a ravaged wasteland overrun with nightmarish abominations.' Developer Bloober Team is based in Krakow, Poland, so it'll be interesting to see how the location feeds into the experience overall. Before Silent Hill 2, Bloober Team was known for games with a more mixed reputation, such as Layers Of Fear, The Medium, and 2019's Blair Witch. If it can maintain the quality of Silent Hill 2 with Cronos, though, that will be quite a turnaround for the studio. Cronos: The New Dawn is set to be released on Xbox Series X/S, PlayStation 5, and PC in 2025, with a specific release date yet to be announced.