• APPLEINSIDER.COM
    AAPL crumble: stock hit again, as White House clarifies 145% China tariff rate
    After a brief respite on Wednesday, Apple's stock resumed the downward trajectory triggered by President Trump's escalating tariff battle with China. Apple's shares have fared poorly since the tariff war began. On Wednesday, Apple ended the day at $198.85, up 15.3% from Tuesday's close, after Trump announced a pause on tariffs. Just one day later, at Thursday's end of trading, the stock returned to its downward trend. Early trading brought Apple's price down to $189.06, but that hit was short-lived as the stock rebounded to $194.78 within an hour. At the close, however, Apple's shares stood at $190.42, down 4.24% from Wednesday's close.
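The day's move can be sanity-checked with a quick percent-change calculation; a minimal sketch using the closing prices quoted above:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

wednesday_close = 198.85  # Wednesday's closing price
thursday_close = 190.42   # Thursday's closing price

# Change from Wednesday's close to Thursday's close
print(round(pct_change(wednesday_close, thursday_close), 2))  # -4.24
```

This reproduces the 4.24% decline reported for Thursday's session.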
  • ARCHINECT.COM
    USA Pavilion unveiled ahead of Expo 2025 in Osaka, Japan
    The USA Pavilion at Expo 2025 in Osaka, Japan has been completed. The project was delivered by a team consisting of New Orleans-based architecture studio Trahan Architects, entertainment agency BRC Imagination Arts, and design-build contractors Alchemy and ES Global. Image © Hufton+Crow
    This is Trahan Architects’ first built project outside of the United States, and it is the fifth time BRC Imagination Arts has created the USA Pavilion at an Expo. Under the theme “Designing Future Society for Our Lives”, Expo 2025, which opens on April 13th, revolves around working towards the achievement of the United Nations’ Sustainable Development Goals and realizing “Society 5.0”, Japan’s national strategy to develop a society that brings prosperity by combining both cyber and physical spaces. Image © Hufton+Crow
    The USA Pavilion is located within Expo 2025’s Grand Ring, sitting halfway between the Forest of Tranquility and the East Gate Entrance Plaza. Described as open, grand, yet minimalistic,...
  • GAMINGBOLT.COM
    The Molasses Flood Has Been Fully Absorbed Into CD Projekt RED
    CD Projekt RED has announced that it has fully absorbed The Molasses Flood, which no longer exists as a separate legal entity. The announcement was made through a statement on the official website of The Molasses Flood (via Eurogamer), which reveals that the merger went into full effect on April 1. “We want to let you know that on April 1, 2025, The Molasses Flood LLC (“TMF”) merged with CD PROJEKT RED Inc. (“CDPR Inc.”), a company being a part of the CD PROJEKT Group,” wrote The Molasses Flood in its statement. “As a result of the merger TMF, in its former legal state (of a separate legal entity) ceased its operations, while CDPR Inc. assumed the rights and obligations of TMF.” The studio said that, while it is no longer an individual legal entity, the availability of The Flame in the Flood and Drake Hollow will not be affected by the move, and CD Projekt will continue to publish the games. “The merger will not affect the availability or distribution of ‘The Flame in The Flood‘ and ‘Drake Hollow‘ video games, which will continue to be published by CD PROJEKT Group,” it wrote. The Molasses Flood co-founder Damian Isla took to LinkedIn to discuss the merger in more detail, referring to the move as “the end of an era.” He also called the merger a good thing for the studio, crediting it with helping break down organizational barriers. “To be ultra clear: this is a GOOD AND HEALTHY thing for the studio, and it was long-expected. It breaks down some organizational barriers, and better integrates the TMF team with the rest of the amazing CDPR org,” wrote Isla. “Overall, it shows a very bright future for Project Sirius (aka “the multiplayer Witcher game,” of which I was the Design Director for three years). It’s going to be an amazing game, one for the books, and I cannot wait until the rest of the world learns about what we’ve been working on.” However, Isla noted that he has decided to “not follow TMF on this transition,” and that March 31 was his last day with the studio. “Obviously this is a big change,” he wrote. “TMF has been a huge part of my life for the past decade plus, and I’m proud to have been a part of building something that lasted as long as TMF did, in a market environment often quite hostile to tiny teams like ours. I think we made some beautiful, memorable and quirky things, and I’ve also gotten to work with some of the kindest, most generous and talented people in the industry.” CD Projekt acquired The Molasses Flood back in October 2021. Since then, the studio has worked on its own projects as well as Project Sirius, a multiplayer game based on The Witcher franchise.
  • WWW.CGCHANNEL.COM
    F12 releases The Grove 2.2 for Blender and Houdini
    F12 – aka developer Wybren van Keulen – has released The Grove 2.2, the latest version of the software for generating biologically plausible tree models for use in VFX, animation and games. The update adds a new Skeleton tool for generating bone-based animation control rigs for trees, and new workflows for reducing their poly counts to levels suitable for real-time use.
    Mimic the growth forms of real trees
    The Grove takes a parametric approach to generating trees, with controls that mimic the factors determining the forms of real plants, resulting in more realistic-looking models. Once the overall form has been set, The Grove fills in details using ‘Twigs’: instanced geometry representing not only actual twigs, but leaves, flowers and fruit, sold separately to the core app. The resulting textured geometry can be exported from the user’s host software in standard file formats, including FBX and OBJ, for use in other DCC applications. Users can also generate wind and growth animations, exportable in Alembic format. Since The Grove 2.0, the software – originally a Blender plugin – has been a standalone application, with integrations for Blender and Houdini.
    https://www.cgchannel.com/wp-content/uploads/2025/04/250410_TheGrove22_Skeleton_sm.mp4
    New Skeleton tool generates and refines animation control rigs
    Key changes in The Grove 2.2 include the new Skeleton tool for automatically generating bone-based animation rigs and their accompanying deformation weights. It starts by generating a dense bone network, with users adjusting control parameters to reduce the bone count to a more production-friendly level, with the skeleton updating in real time. Once generated, the skeleton can be used to pose the tree manually, or to animate it: either to apply wind sway, or to animate collisions with other objects like vehicles. The rig can be simplified enough for animations to run at interactive speeds, and can be used in game engines as well as offline rendering, although there is currently no direct export pipeline. The Skeleton tool is available in the higher-end Indie and Studio editions of The Grove.
    https://www.cgchannel.com/wp-content/uploads/2025/04/250410_TheGrove22_Games.mp4
    A tree reduced to 42k triangles and 49 bones for real-time use.
    New workflows for reducing the poly count of trees for use in game engines
    The Grove 2.2 also features a number of optimizations intended to help reduce the poly count of generated trees to levels suitable for use as LOD assets in games. They’re as much suggested workflows as they are changes to the software itself, so check out the online release notes for more details. Other performance increases include moving more of the code used to draw viewport previews of growth simulations from Python to the software core, making growth cycles “50% faster”.
    Other features and workflow improvements
    Other changes include a new Sow toolset, which mimics trees spreading by seed, generating seedlings and saplings around the base of a parent tree. You can find a list of smaller improvements via the link at the foot of the story.
    Pricing and availability
    The Grove 2.2 is compatible with Blender 4.2+ and Houdini 19.5 on Windows, Linux and macOS. The software comes in three editions. All of them include the Blender plugin, but only the Studio edition includes the Houdini plugin. The Starter edition has an MSRP of €99, up €10 since the previous release. The Indie edition has an MSRP of €199, up €50. The Studio edition has an MSRP of €799, up €79. Individual Twigs cost €9.69. Read a full list of new features in The Grove 2.2 on the product website.
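The bone-count reduction workflow described above can be illustrated with a toy sketch. This is a hypothetical simplification scheme (dropping joints that fall closer than a minimum bone length), invented for illustration; it is not The Grove's actual algorithm, and the function and data are made up:

```python
import math

def simplify_chain(joints, min_length):
    """Collapse a dense joint chain into fewer bones by keeping only
    joints at least `min_length` apart along the chain.
    Hypothetical sketch of bone-count reduction, not The Grove's code."""
    kept = [joints[0]]                 # always keep the root joint
    for p in joints[1:-1]:
        if math.dist(kept[-1], p) >= min_length:
            kept.append(p)
    kept.append(joints[-1])            # always keep the tip joint
    return kept

# A straight branch sampled as 100 tiny bones (joint spacing 0.1)
dense = [(0.0, 0.1 * i) for i in range(101)]
coarse = simplify_chain(dense, min_length=1.0)
print(len(coarse) - 1)  # 10 bones after reduction
```

In a real rig the threshold would be the user-facing control, with the skeleton re-generated interactively as it changes, much as the article describes.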
  • WWW.SMITHSONIANMAG.COM
    In a World First, Researchers Mapped Part of a Mouse's Brain in Incredible Detail. It's a Leap Forward for Neuroscience
    The 3D brain map includes more than 200,000 cells, 523 million synapses and over two miles of axons, representing the most detailed wiring diagram of a piece of mammal brain ever constructed.
    A subset of more than 1,000 neurons, representing just a snapshot of the complexity mapped within a cubic millimeter of mouse brain tissue. Image: Allen Institute
    In 1979, biologist Francis Crick claimed it would be impossible to create an accurate diagram of the brain’s wiring and neuronal activity—even within just a cubic millimeter of brain tissue. Now, a team of more than 150 researchers has proven him wrong by mapping a tiny portion of a mouse’s brain. As detailed in a collection of ten studies published Wednesday in Nature journals, the interdisciplinary team of scientists participating in the Machine Intelligence from Cortical Networks (MICrONS) project has mapped the wiring and visual functions of a piece of mouse brain roughly the size of a grain of sand. This monumental effort represents the most detailed wiring diagram of a mammal brain ever, and it holds important implications for studying brain disorders in humans. “It definitely inspires a sense of awe, just like looking at pictures of the galaxies,” Forrest Collman, a neuroscientist at the Allen Institute and one of the project’s lead researchers, tells the Associated Press’ Lauran Neergaard. “You get a sense of how complicated you are.” The map charts 200,000 cells, 523 million synapses (the connections between neurons) and more than two miles of axons (the part of a neuron that passes on electrical impulses). All that data adds up to 1.6 petabytes, equivalent to about 22 years of continuous, high-definition video, according to a National Institutes of Health statement.
    “Imagine a kind of Google Maps for the brain, not just showing the major highways, but every small street, every house, every room inside each house and even every door and window,” Collman tells the London Times’ Rhys Blakely. “Just like people use Google Maps to figure out the best route from point A to point B, or even to check if a route exists at all, this kind of detailed brain map lets scientists see whether two neurons are connected and exactly where those connections occur.”
    Video: Revealing the largest wiring diagram and functional map of the brain
    To create the brain map, researchers worked with a lab mouse genetically engineered to make its neurons light up when they fire an electrical signal. They then recorded the brain activity in its visual cortex—the region of the brain associated with vision—as the mouse watched YouTube videos and movie clips, including scenes from Mad Max: Fury Road, The Matrix and the Qatsi experimental documentary trilogy. The researchers then extracted a cubic millimeter of brain tissue and sliced it into roughly 28,000 layers—each about 400 times thinner than a human hair. They photographed each layer, used artificial intelligence to process the images into a digital 3D diagram and combined it with the previously recorded brain activity patterns associated with vision. Because they had studied how the mouse’s neurons lit up as it watched videos, the team could compare the neurons’ mapped structure with their functions and piece together how the connections between them work. While scientists had previously studied brain cells’ structure and function separately, “understanding how neuronal function emerges at the circuit level has been challenging, since we need to study both function and wiring in the same neurons,” Andreas Tolias, a neuroscientist at Baylor College of Medicine and one of the lead researchers, tells Reuters’ Will Dunham. “Our study represents the largest effort to date to systematically unify brain structure and function within a single individual mouse,” he adds.
    The brain diagram unveiled new cell types, characteristics, relationships and rules of organization and function—and that’s just the start. Researchers also discovered previously unknown complexity in inhibitory cells, which repress brain activity. These cells, they found, are highly selective, targeting specific sets of neurons. Such detailed insight into the brain’s function and structure carries important implications for understanding cognition, as well as how shifts in this wiring might be related to disorders such as Alzheimer’s, autism, Parkinson’s and schizophrenia. “If you have a broken radio and you have the circuit diagram, you’ll be in a better position to fix it,” Nuno da Costa, a biologist at the Allen Institute and one of the project leaders, says in a statement. “In the future, we can use this to compare the brain wiring in a healthy mouse to the brain wiring in a model of disease.” Mouse brains are similar enough to human brains that some of what we learn from studying their neural circuitry could apply to our own, says Sebastian Seung, a Princeton University neuroscientist involved with the MICrONS project, to the New York Times’ Carl Zimmer. This might help researchers discover more targeted medications that minimize side effects when treating psychological disorders, he adds. Davi Bock, a neuroscientist at the University of Vermont who was not involved in the project, describes the brain map to the New York Times as a “milestone.” He is associated with another project that last year unveiled the first complete map of an adult fruit fly brain. Bock adds that the scientific advancements behind this success bring scientists much closer to the next goal: mapping an entire mouse brain. “It’s totally doable, and I think it’s worth doing,” he says.
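The "1.6 petabytes ≈ 22 years of HD video" comparison can be sanity-checked with back-of-the-envelope arithmetic. The ~18 Mbit/s HD bitrate below is an assumed figure chosen for illustration, not one stated in the article:

```python
# Back-of-the-envelope check of "1.6 PB ~ 22 years of HD video".
PETABYTE = 1e15                       # bytes (decimal petabyte)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

data_bytes = 1.6 * PETABYTE
hd_bytes_per_second = 18e6 / 8        # assumed 18 Mbit/s HD stream

years = data_bytes / hd_bytes_per_second / SECONDS_PER_YEAR
print(round(years, 1))                # roughly 22 years
```

At that assumed bitrate the dataset works out to about 22.5 years of continuous video, consistent with the NIH figure.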
  • VENTUREBEAT.COM
    NTT launches physics of AI group and AI inference chip design for 4K video
    NTT Research announced at an event that it has started a new AI basic research group, dubbed the Physics of Artificial Intelligence Group.
  • WWW.GAMESINDUSTRY.BIZ
    Bloodborne producer establishes Sirius Studio
    Tokyo studio will focus on console and VR/XR titles "that make users' hearts shine"
    Image credit: Sirius Studios
    News by Vikki Blake, Contributor. Published on April 10, 2025
    Bloodborne producer Teruyuki Toriyama has banded together with former Thirdverse colleagues Tomohiro Suzuki and Hideki Irie to establish "world class development studio" Sirius Studio. As reported by Famitsu (thanks, VGC), Sirius - a subsidiary of Gz Group - will "prioritize creator independence" after all three developers left their prior employer, Thirdverse, when its focus shifted to more casual games. "We were part of a team that produced high-end games at the Japan studio of a company called Thirdverse," explained Irie, via automated translation. "However, Thirdverse has recently decided to shift its business to producing casual games. "Since we were a team that came together to produce high-end games, we had the intention of continuing to make high-end games, so we directly consulted with [CEO of Thirdverse] Hironao Kunimitsu and made the transfer amicably." The Tokyo-based studio will reportedly focus on console and VR/XR titles, developing both new and third-party IPs "that make users' hearts shine."
  • WWW.GAMEDEVELOPER.COM
    SAG-AFTRA receives bipartisan congressional support for anti-deepfake and AI act
    The SAG-AFTRA Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act has been reintroduced in the Senate, where it has received bipartisan support. During a press conference held on April 9, the union said that, if passed, the bill would establish “a federal right in voice and likeness to protect against unauthorized use of digital replicas.” The bill, sponsored by senators Marsha Blackburn, Chris Coons, Amy Klobuchar, and Thom Tillis, would apply to both audiovisual works and sound recordings. It was originally introduced in 2024, but did not pass before the elections.
    "The NO FAKES Act isn’t just about protecting actors, recording artists and broadcasters,” SAG-AFTRA president Fran Drescher said at the press conference. “Deepfakes can ruin all lives. It doesn’t matter if you’re a public figure or a high school student being exploited by internet creeps. It’s time to give humans the power to say NO, not my face, not my voice!”
    National executive director and chief negotiator Duncan Crabtree-Ireland explained that the bill would allow SAG-AFTRA members and workers who rely on their face and voice for their livelihood to demand that platforms “remove illegal voice and image clones.” It would also grant a legal path to seeking damages from “those who intentionally cause harm.” “As innovation continues to rapidly evolve, it’s time for commonsense legislation that defends individual rights,” Crabtree-Ireland said.
    The main exceptions to the bill would be digital replicas used in “bona fide commentary, criticism, scholarship, satire, or parody.” SAG-AFTRA mentioned that the NO FAKES Act would preserve existing protections at the state level. This includes Tennessee’s landmark SAG-AFTRA-supported Ensuring Likeness Voice and Image Security (ELVIS) Act, alongside California’s SAG-AFTRA-sponsored AB2602. As such, NO FAKES would provide “one strong, consent-based framework for digital replica uses in expressive works nationwide.”
    A brief history of SAG-AFTRA's strike
    The union is entering its second year of striking studios under the Interactive Media Bargaining agreement. The agreement has been signed by over 180 studios and provides protections against AI voice usage. While the union continues to urge more studios to sign the agreement, it has also called out “alarming loopholes” in AI proposals from major game studios.
    AI continues to be at the center of game industry news. During the past month alone, Activision Blizzard used generative AI to test interest in games that never existed in the first place. Castle of Secrets developer Serene Questworks allegedly replaced its voice cast with generative AI. Sony also used the technology to turn Horizon series protagonist Aloy into an unsettling digital animatronic.
    At GDC 2025, former EA software engineer and independent senior AI programmer David “Rez” Graham expressed worries about “the death of art” surrounding the use of generative AI. "I hope this is hyperbole,” Graham said. “I hope in five years people are laughing at me. [...] I hope that's what happens. But you can't deny there is some path that ends with this. With everything just being this recycled shoveled garbage. The race to the cheapest show. To the cheapest game. Because the people who are controlling the top corporations, that's all they give a shit about."
  • WWW.THEVERGE.COM
    NHTSA staffers evaluating the risks of self-driving cars were reportedly fired by DOGE
    Elon Musk’s Department of Government Efficiency (DOGE) fired about 30 members of the National Highway Traffic Safety Administration (NHTSA) in February, many of them part of a department that assesses the risks of self-driving cars, according to the Financial Times. One worker laid off from the NHTSA’s so-called “office of vehicle automation safety” told the FT that DOGE’s actions could “weaken NHTSA’s ability to understand self-driving technologies.” Another worker said it would be “ironic” if the firings slowed down Tesla’s plans for autonomous vehicles. Tesla is under multiple investigations from the NHTSA over its automated features, including its Full Self-Driving software and remote summon feature. Tesla’s FSD and Autopilot driver-assistance systems have more reported crashes on the road than those of any other company. Families of victims who died in Tesla crashes have urged Transportation Secretary Sean Duffy to protect Biden-era rules requiring reports of automated vehicle crashes, fearing Musk’s involvement in the Trump administration could influence investigations. The firings also came just months after the NHTSA released a new framework that could ease regulation of self-driving cars in exchange for companies sharing more data with the regulator.
  • WWW.MARKTECHPOST.COM
    OpenAI Open Sources BrowseComp: A New Benchmark for Measuring the Ability for AI Agents to Browse the Web
    Despite advances in large language models (LLMs), AI agents still face notable limitations when navigating the open web to retrieve complex information. While many models excel on static knowledge benchmarks, they often underperform when tasked with locating nuanced, context-dependent facts across multiple sources. Most existing benchmarks evaluate a model’s recall of easily accessible knowledge, which does not reflect the intricacy of real-world browsing tasks. In contrast, agents operating in applied settings—whether assisting with research, summarizing policy, or fact-checking claims—require persistence, structured reasoning, and the ability to dynamically adapt their search strategies. These capabilities remain underdeveloped in current AI systems.
    OpenAI Open Sources BrowseComp: A Benchmark of 1,266 Information-Seeking Tasks
    To better evaluate these capabilities, OpenAI has released BrowseComp, a benchmark designed to assess agents’ ability to persistently browse the web and retrieve hard-to-find information. The benchmark includes 1,266 fact-seeking problems, each with a short, unambiguous answer. Solving these tasks often requires navigating multiple webpages, reconciling diverse information, and filtering relevant signals from noise. The benchmark is inspired by the notion that, just as programming competitions serve as focused tests for coding agents, BrowseComp offers a similarly constrained yet revealing evaluation of web-browsing agents. It deliberately avoids tasks with ambiguous user goals or long-form outputs, focusing instead on the core competencies of precision, reasoning, and endurance. BrowseComp was created using a reverse-question design methodology: beginning with a specific, verifiable fact, the benchmark’s authors constructed a question designed to obscure the answer through complexity and constraint. Human trainers ensured that questions could not be solved via superficial search and would challenge both retrieval and reasoning capabilities. Additionally, questions were vetted to ensure they would not be easily solvable by GPT-4, OpenAI o1, or earlier browsing-enabled models. The dataset spans a broad range of domains—including science, history, arts, sports, and entertainment—and is balanced to promote topic diversity. Each task is formulated so that the correct answer is a short string, which simplifies evaluation and reduces ambiguity. Human performance was also assessed, with human trainers given two hours per task; most failed to solve the majority of tasks, reflecting their difficulty.
    Model Evaluation and Findings
    OpenAI evaluated several models on BrowseComp, including GPT-4o (with and without browsing), GPT-4.5, OpenAI o1, and Deep Research—a model specifically trained to handle persistent browsing tasks. The results indicate that models without advanced search or reasoning strategies perform poorly: GPT-4o without browsing achieved 0.6% accuracy, and with browsing enabled, only 1.9%. GPT-4.5 scored similarly low. OpenAI o1, with improved reasoning but no browsing, performed moderately better at 9.9%. Deep Research outperformed all other models, achieving 51.5% accuracy. Its architecture and training emphasize iterative searching, evidence synthesis, and adaptive navigation. Performance improved further with multiple trials per question and aggregation strategies such as best-of-N selection and confidence-based voting. While Deep Research exhibited higher calibration error—frequently being overconfident in incorrect answers—it often identified its own correct outputs with internal consistency, suggesting a usable confidence signal.
    Human Performance and Task Difficulty
    Human trainers attempted to solve the benchmark problems without the assistance of AI tools. Of the 1,255 tasks attempted, 71% were marked as unsolvable within the two-hour window, and only 29% were successfully completed. Among the completed tasks, the agreement rate with the reference answer was 86.4%. These outcomes underscore the complexity of the benchmark and suggest that current AI models still fall short of the adaptability and background reasoning skills such tasks require.
    Conclusion
    BrowseComp introduces a focused, verifiable, and technically demanding benchmark for evaluating the core capabilities of web-browsing agents. By shifting emphasis from static recall to dynamic retrieval and multi-hop reasoning, it presents a realistic challenge that aligns closely with emerging real-world applications. Although current models, including those with browsing capabilities, perform unevenly, the Deep Research agent illustrates the potential of dedicated architectures to bridge this gap. BrowseComp is publicly available via GitHub and detailed on OpenAI’s official blog.
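The confidence-based voting mentioned among the aggregation strategies can be sketched in a few lines. This is a hypothetical illustration of the general idea (summing per-answer confidence across repeated attempts), not OpenAI's actual evaluation code; the function and data are invented:

```python
from collections import defaultdict

def confidence_vote(candidates):
    """Pick a final answer from multiple (answer, confidence) attempts
    by summing confidence per distinct (normalized) answer.
    Hypothetical sketch of confidence-based voting, not OpenAI's code."""
    scores = defaultdict(float)
    for answer, confidence in candidates:
        scores[answer.strip().lower()] += confidence
    return max(scores, key=scores.get)

# Three attempts at one question: two agree (after normalization)
attempts = [("paris", 0.9), ("Paris", 0.8), ("Lyon", 0.95)]
print(confidence_vote(attempts))  # "paris": combined 1.7 beats 0.95
```

Because BrowseComp answers are short strings, simple normalization and exact matching like this are enough for aggregation; best-of-N selection would instead just keep the single highest-confidence attempt.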