  • Razer's New Remote Play Tech Will Soon Let You Stream Your PC Library To Your Phone
    www.gamespot.com
    Razer has shown off a long list of new hardware and apps at CES 2025, and one of the coolest is the new PC Remote Play feature coming to its free Razer Nexus mobile app. As the name implies, the new feature lets users stream gameplay directly from their PC to various mobile devices, including Android phones, iOS devices, and Razer's Edge gaming tablet.

    According to Razer's press release, the PC Remote Play feature is a free update for the Nexus app. You can scroll through a list of all the supported games installed on your PC and launch and play them directly in the Nexus app while on the go. Razer claims the feature will allow for "full graphical fidelity" while streaming, and it uses Razer Sensa HD Haptics to add rumble and other tactile feedback while playing. Razer is demoing the PC Remote Play feature at its booth for CES attendees, and the beta is expected to roll out to the Razer Nexus app soon.

    (Image: Razer PC Remote Play on the Kishi Ultra)

    In the meantime, it's worth upgrading your mobile cloud gaming setup with the Razer Kishi Ultra controller to take advantage of the new PC streaming feature. You can currently grab the controller for just $130 at Amazon (was $150). The device works with select Android phones, tablets, and Apple devices with a USB-C port, including the iPhone 15, iPhone 16, and iPad Mini (A17 Pro). While the Kishi Ultra offers the most feature support, you'll be able to use the feature with other Razer mobile controllers, including the Kishi V2 and V2 Pro.

    Continue Reading at GameSpot
  • Sony's "Immersive Entertainment Concept" Could Potentially Let You Smell Zombies
    www.gamespot.com
    With CES 2025 in full swing, one of the more intriguing exhibits at the trade show is a proof-of-concept entertainment experience from Sony. The PlayStation company is currently showing off what a future gaming room could look and smell like, as it's demoing what is essentially the closest thing possible to a Star Trek holodeck. According to Sony, the Future Immersive Entertainment Concept combines Crystal LED panels, audio, haptics, scent, and atmospherics with a PlayStation game like The Last of Us to create an interactive experience. You could compare this to a 4D version of Duck Hunt, but with people horribly mutated by a fictional cordyceps fungus instead of mallards, and we don't even want to know what rotting mushroom-monsters smell like in a world bereft of personal hygiene products.

    Visually, it's a convincing step into that bleak world thanks to the realistic graphics. This isn't the first time that Sony has revealed conceptual gaming hardware; back in May 2024, the company showed off a high-tech PlayStation controller in a promotional video, teasing a radical departure from the standard DualSense device.

    Continue Reading at GameSpot
  • New Virtua Fighter Gets Unexpected First Look, But It's Not Gameplay
    www.gamespot.com
    Sega and Ryu Ga Gotoku Studio have released footage of the upcoming Virtua Fighter project, which was first shown at CES 2025. However, they made it clear that the footage is not actual gameplay; it is "in-game" concept footage created before development began.

    The quick 35-second video presents one stage that looks like a run-down city, simply called The City, which is described as a "hotbed of vice" that "attracts both the wicked and desperate." The camera then pans out to show two characters fighting each other; both are models of Akira, Virtua Fighter's mascot. A HUD with HP bars suddenly appears at the top, and the two characters trade blows before one kicks the other into a pile of boxes in a cinematic cutscene, ending the footage.

    Continue Reading at GameSpot
  • Hyper Light Breaker Devs Detail the Story and Universe of the Franchise
    gamerant.com
    Hyper Light Breaker's Early Access release is nearly here, providing a sequel to indie darling Hyper Light Drifter after nearly a decade. Hyper Light Breaker will differ significantly from its predecessor, evolving not only aesthetic and gameplay aspects, but story and worldbuilding elements as well.
  • Why Haunted Chocolatier's Romance Should Be Opposite to Stardew Valley's
    gamerant.com
    While romance is not everything in ConcernedApe's Stardew Valley, it's at least a significant contributor to the game's heart and soul, effectively encompassing a sizable portion of the "life" part of the farm life sim genre it calls home. Not only is it a great way to get to know many of Stardew Valley's NPCs on a deeper level, but it also gives players the comfort of someone waiting at home for them after a long day's work, and even someone who helps out around the farm. In light of that, it's good news that ConcernedApe's upcoming chocolate confectionery store sim, Haunted Chocolatier, includes romance in its gameplay as well.
  • Will One Piece End In 2025?
    gamerant.com
    Despite being around for only about 30 years, One Piece feels like it's been around forever. With thousands of chapters and episodes' worth of story to tell, it seems the series could run forever. However, all stories must come to an end, and despite its scope, One Piece is no different.
  • Why World Foundation Models Will Be Key to Advancing Physical AI
    blogs.nvidia.com
    In the fast-evolving landscape of AI, it's becoming increasingly important to develop models that can accurately simulate and predict outcomes in physical, real-world environments to enable the next generation of physical AI systems.

    Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, joined the NVIDIA AI Podcast to discuss the significance of world foundation models (WFMs): powerful neural networks that can simulate physical environments. WFMs can generate detailed videos from text or image input data and predict how a scene evolves by combining its current state (image or video) with actions such as prompts or control signals (a toy sketch of such a prediction loop appears at the end of this post).

    "World foundation models are important to physical AI developers," said Liu. "They can imagine many different environments and can simulate the future, so we can make good decisions based on this simulation." This is particularly valuable for physical AI systems, such as robots and self-driving cars, which must interact safely and efficiently with the real world.

    The AI Podcast: NVIDIA's Ming-Yu Liu on How World Foundation Models Will Advance Physical AI (Episode 240)

    Why Are World Foundation Models Important?

    Building world models often requires vast amounts of data, which can be difficult and expensive to collect. WFMs can generate synthetic data, providing a rich, varied dataset that enhances the training process. In addition, training and testing physical AI systems in the real world can be resource-intensive. WFMs provide virtual, 3D environments where developers can simulate and test these systems in a controlled setting, without the risks and costs associated with real-world trials.

    Open Access to World Foundation Models

    At the CES trade show, NVIDIA announced NVIDIA Cosmos, a platform of generative WFMs that accelerate the development of physical AI systems such as robots and self-driving cars. The platform is designed to be open and accessible, and includes pretrained WFMs based on diffusion and autoregressive architectures, along with tokenizers that can compress videos into tokens for transformer models.

    Liu explained that with these open models, enterprises and developers have all the ingredients they need to build large-scale models. The open platform also gives teams the flexibility to explore various options for training and fine-tuning models, or to build their own based on specific needs.

    Enhancing AI Workflows Across Industries

    WFMs are expected to enhance AI workflows and development in various industries, and Liu sees particularly significant impacts in two areas: "The self-driving car industry and the humanoid [robot] industry will benefit a lot from world model development," he said. "[WFMs] can simulate different environments that will be difficult to have in the real world, to make sure the agent behaves respectively."

    For self-driving cars, these models can simulate environments that allow for comprehensive testing and optimization. For example, a self-driving car can be tested in various simulated weather conditions and traffic scenarios to help ensure it performs safely and efficiently before deployment on roads. In robotics, WFMs can simulate and verify the behavior of robotic systems in different environments to make sure they perform tasks safely and efficiently before deployment. NVIDIA is collaborating with companies like 1X, Huobi and XPENG to help address challenges in physical AI development and advance their systems.

    "We are still in the infancy of world foundation model development: it's useful, but we need to make it more useful," Liu said. "We also need to study how to best integrate these world models into the physical AI systems in a way that can really benefit them."

    Listen to the podcast with Ming-Yu Liu, or read the transcript. Learn more about NVIDIA Cosmos and the latest announcements in generative AI and robotics by watching the CES opening keynote by NVIDIA founder and CEO Jensen Huang, and by joining NVIDIA sessions at the show.
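    To make the prediction loop described above concrete, here is a minimal, hypothetical sketch of how a planner might query a world foundation model: the model takes the current observation plus a candidate action and returns a predicted future clip, and the planner compares those futures before acting. The WorldModel class, its predict() method, and the toy "repeat the last frame" behavior are illustrative stand-ins, not the NVIDIA Cosmos API.

```python
# Hypothetical world-model rollout loop: predict one candidate future per action
# so a planner can compare them before acting in the real world.
# WorldModel.predict() is a placeholder, not a real NVIDIA Cosmos call.
from dataclasses import dataclass

import numpy as np


@dataclass
class Observation:
    frames: np.ndarray  # (T, H, W, 3) video clip describing the current scene state


class WorldModel:
    """Stand-in for a pretrained world foundation model."""

    def predict(self, obs: Observation, action: str, horizon: int = 16) -> Observation:
        # A real WFM would tokenize the frames, condition on the action (a text
        # prompt or control signal) and decode `horizon` future frames.
        # Here we simply repeat the last frame so the sketch stays runnable.
        last = obs.frames[-1:]
        return Observation(frames=np.repeat(last, horizon, axis=0))


def rollout(model: WorldModel, start: Observation, actions: list[str]) -> list[Observation]:
    """Simulate one candidate future per action, all branching from the same state."""
    return [model.predict(start, action) for action in actions]


if __name__ == "__main__":
    model = WorldModel()
    start = Observation(frames=np.zeros((8, 64, 64, 3), dtype=np.uint8))
    futures = rollout(model, start, ["steer left", "brake", "accelerate"])
    print([f.frames.shape for f in futures])  # three predicted clips of 16 frames each
```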
  • Why Enterprises Need AI Query Engines to Fuel Agentic AI
    blogs.nvidia.com
    Data is the fuel of AI applications, but the magnitude and scale of enterprise data often make it too expensive and time-consuming to use effectively.

    According to IDC's Global DataSphere(1), enterprises will generate 317 zettabytes of data annually by 2028, including 29 zettabytes of unique data, of which 78% will be unstructured and 44% of that will be audio and video. Because of the extremely high volume and varied data types, most generative AI applications use only a fraction of the total data being stored and generated. For enterprises to thrive in the AI era, they must find a way to make use of all of their data. This isn't possible using traditional computing and data processing techniques. Instead, enterprises need an AI query engine.

    What Is an AI Query Engine?

    Simply put, an AI query engine is a system that connects AI applications, or AI agents, to data. It's a critical component of agentic AI, serving as a bridge between an organization's knowledge base and AI-powered applications and enabling more accurate, context-aware responses.

    AI agents form the basis of an AI query engine, where they can gather information and do work to assist human employees. An AI agent will gather information from many data sources, plan, reason and take action. AI agents can communicate with users, or they can work in the background, where human feedback and interaction will always be available.

    In practice, an AI query engine is a sophisticated system that efficiently processes large amounts of data, extracts and stores knowledge, and performs semantic search on that knowledge so it can be quickly retrieved and used by AI. It processes, stores and retrieves data, connecting AI agents to insights.

    AI Query Engines Unlock Intelligence in Unstructured Data

    An enterprise's AI query engine will have access to knowledge stored in many different formats, but being able to extract intelligence from unstructured data is one of the most significant advancements it enables. To generate insights, traditional query engines rely on structured queries and data sources, such as relational databases: users must formulate precise queries using languages like SQL, and results are limited to predefined data formats.

    In contrast, AI query engines can process structured, semi-structured and unstructured data. Common unstructured data formats are PDFs, log files, images and video, typically stored on object stores, file servers and parallel file systems. AI agents communicate with users and with each other using natural language, which enables them to interpret user intent, even when it's ambiguous, by accessing diverse data sources. These agents can deliver results in a conversational format that users can readily interpret. This capability makes it possible to derive more insights and intelligence from any type of data, not just data that fits neatly into rows and columns. For example, companies like DataStax and NetApp are building AI data platforms that give their customers an AI query engine for their next-generation applications.

    Key Features of AI Query Engines

    AI query engines possess several crucial capabilities:

    Diverse data handling: AI query engines can access and process various data types, including structured, semi-structured and unstructured data from multiple sources, including text, PDF, image, video and specialty data types.

    Scalability: AI query engines can efficiently handle petabyte-scale data, making all enterprise knowledge available to AI applications quickly.

    Accurate retrieval: AI query engines provide high-accuracy, high-performance embedding, vector search and reranking of knowledge from multiple sources.

    Continuous learning: AI query engines can store and incorporate feedback from AI-powered applications, creating an AI data flywheel in which the feedback is used to refine models and increase the effectiveness of the applications over time.

    Retrieval-augmented generation (RAG) is a component of AI query engines. RAG uses the power of generative AI models to act as a natural language interface to data, allowing models to access and incorporate relevant information from large datasets during the response generation process (see the sketch below). Using RAG, any business or other organization can turn its technical information, policy manuals, videos and other data into useful knowledge bases. An AI query engine can then rely on these sources to support such areas as customer relations, employee training and developer productivity. Additional information-retrieval techniques and ways to store knowledge are in research and development, so the capabilities of an AI query engine are expected to evolve rapidly.

    The Impact of AI Query Engines

    Using AI query engines, enterprises can fully harness the power of AI agents to connect their workforces to vast amounts of enterprise knowledge, improve the accuracy and relevance of AI-generated responses, process and utilize previously untapped data sources, and create data-driven AI flywheels that continuously improve their AI applications. Some examples include an AI virtual assistant that provides personalized, 24/7 customer service experiences, an AI agent for searching and summarizing video, an AI agent for analyzing software vulnerabilities, and an AI research assistant.

    By bridging the gap between raw data and AI-powered applications, AI query engines will play a crucial role in helping organizations extract value from their data. NVIDIA Blueprints can help enterprises get started connecting AI to their data. Learn more about NVIDIA Blueprints and try them in the NVIDIA API catalog.

    (1) IDC, Global DataSphere Forecast, 2024.
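    As a concrete illustration of the retrieval step described above, the following minimal sketch embeds a small corpus, runs a cosine-similarity vector search for a user's question, and assembles the retrieved passages as context for a generative model. The embed() function and the corpus are hypothetical stand-ins; a production AI query engine would use a real embedding model, a vector database and an LLM rather than this toy code.

```python
# Toy retrieval-augmented generation (RAG) flow: embed, vector-search, assemble context.
# embed() is a deterministic stand-in for a real embedding model.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding model: map text to a unit vector (not semantically
    meaningful here; it only demonstrates the structure of the pipeline)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)


def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Cosine-similarity vector search over a small in-memory corpus."""
    q = embed(query)
    index = np.stack([embed(d) for d in docs])  # (N, 384), one unit vector per document
    scores = index @ q                          # cosine similarity against the query
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]


def answer(query: str, docs: list[str]) -> str:
    context = "\n".join(top_k(query, docs))
    # A real system would now call a generative model with the query plus this
    # retrieved context; here we just show what would be passed to it.
    return f"Question: {query}\nRetrieved context:\n{context}"


if __name__ == "__main__":
    corpus = [
        "Policy manual: customers may request refunds within 30 days of purchase.",
        "Onboarding video transcript: new hires complete security training in week one.",
        "Release notes v2.1: fixed the login timeout bug on mobile clients.",
    ]
    print(answer("How long do customers have to request a refund?", corpus))
```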
  • Visual Effects and Animation Bring Historical Events to Life for Documentaries
    www.vfxvoice.com
    By TREVOR HOGG

    When it comes to covering historical events, documentarians go on a journey to find and acquire rights to archival footage and photographs, or fill in the visual gaps with talking heads or reenactments. In some cases, the reenactments are more about being authentic to the emotion of a moment than to the actual physical details. With technology becoming more affordable and accessible, the ability to use visual effects and animation within a tight budget has allowed for even more creative and innovative ways to bring the past to cinematic life.

    Bad River

    "We do social justice documentaries," states Andrew Sanderson, Associate Producer at 50 Eggs Films. "Bad River deals with a Native American tribe called the Bad River Band, located in Northern Wisconsin, who are fighting for their sovereignty. Some things are happening now, and some things happened back in 1845 or 1850 that we don't have any photos, footage or music from, so we had to be creative when we were making the film. We want to tell stories the best we can. A lot of the Elders who we interviewed from the band would tell stories of Chief Buffalo, the historic chief of the La Pointe Band of the Ojibwe, and other Ojibwe leaders going to Washington, D.C. in 1852 to try to convince President Millard Fillmore not to remove them from their land. These are stories that have been passed down from generation to generation, and it's important for us to get it right but let the folks doing the interview tell their story."

    (Image caption: Illustrations by Bad River Band youths as well as courtroom drawings were the inspiration for the animated sequences created by Punkrobot. Images courtesy of 50 Eggs Films)

    Sanderson employed unique approaches to making the film. He remarks, "There is a great sense of community, so we wanted to include Bad River as much as we could in the filmmaking process. We would identify local youth artists in the area, and they would make sketches for us of different scenes or elements we were trying to capture. Then we take those sketches and give them to Punkrobot, an animation company in Chile, which would bring them to life."

    Animated sequences were expanded upon. "There is a scene where one of the interviewees is describing, from when he was younger, people from the Bureau of Indian Affairs driving around the reservation trying to catch kids to bring them to boarding schools," Sanderson explains. "We had one of our youth artists draw a man coming out of a car. Then we would have Punkrobot animate that and bring it even a step further into a whole animated sequence. Sometimes, it would transition to another still that we had or another piece of media, so it flowed well. In another example, we had licensed some black-and-white footage of the front lawn of the White House that had sheep eating the grass. We had Punkrobot sketch out what would be the next scene, and from there, it transitioned into the sketch of the interior of the White House where they're plotting to take land from different reservations."

    A legal battle between the Bad River Band and Canadian oil and gas pipeline operator Enbridge is also included. "They had a case that was in Madison Western District Superior Court, so we weren't allowed to have any photographs or recording devices in the court, but we wanted to show what was going on. We hired a courtroom sketch artist, told him who the key people were, and had him get a selection of sketches over two days. Then, we had Punkrobot animate those sketches to tell the story of what was going on in the courtroom when we couldn't have told it any other way visually," Sanderson adds. "We basically used different mediums and blended them all together to make sequences that are visually appealing and can help bring people into the story."

    Jackie Shane

    Piecing together the life of a trans soul singer, who is revered along with her contemporaries Etta James and Little Richard, and who vanished from public view 40 years ago, is Any Other Way: The Jackie Shane Story, directed by Michael Mabbott and Lucah Rosenberg-Lee and produced by Banger Films and the National Film Board of Canada. "We had to bring Jackie's story to life, and roto seemed like a cost-effective way to do that because we are starting with an actor, not doing animation from scratch, which can be expensive and not look good if you don't have the right team," remarks Director of Animation Luca Tarantini. "We developed an interesting visual effects process where we ended up with something that was shot relatively inexpensively, and through clever piecing together of strange techniques, we made it look as though 2,000 frames were painted by hand."

    (Image caption: Machine learning and Stable Diffusion enabled the animated sequences to go from 15 to 40 minutes of screen time in Any Other Way: The Jackie Shane Story. Images courtesy of Banger Films and the National Film Board of Canada)

    Machine learning and Stable Diffusion were cornerstones of the animation process. "Stable Diffusion is meant for you to type in a sentence, and it generates an image of that thing. But we were using it where you start with an image, type in a bit of a prompt, and it gives you an interpretation of that original image. If you get the settings just right, it doesn't distort the original image too much but stylizes it in the correct way." (A rough code sketch of this image-to-image approach appears at the end of this piece.)

    (Image caption: Adding flares and working with a virtual set in the animated sequences for Any Other Way: The Jackie Shane Story. Images courtesy of Banger Films and the National Film Board of Canada)

    As the edit evolved, it became clear that the animation was a major component of the storytelling, and it consequently went from 15 to 40 minutes of screen time. "Not only did the amount of animation and the time we spent on it have to change, it became impossible without experimenting with new techniques to try to make it feasible for a tiny team of two or three people to deal with that volume of content," notes Co-Director of Animation Jared Raab. "We managed to mix a bit of everything that everybody knew, from shooting on an actual soundstage in a scrappy, music video-style way, to greenscreen. Luca pioneered simple camera tracking to get camera position data for when he created the backgrounds, which were made in 3D using Cinema 4D, then I did a ton of Adobe After Effects work to create some of the 2D animation of the space. Last, Luca created entirely 3D lighting using the camera data to get the lens flares and some of the stuff that we loved from early archival music documentaries. It was a sprinkling of a little bit of everything that we knew how to make a film into the project, and the chemistry gave us just the right recipe to pull it off."

    The Pigeon Tunnel

    Union VFX made a shift from working on feature films and high-end episodic projects to contributing to the Errol Morris documentary The Pigeon Tunnel, which explores the life and career of John le Carré through a series of one-on-one interviews with the former intelligence officer turned acclaimed novelist. "Generally, visual effects for documentaries are all about enhancing the audience's understanding of the real-life events and subject matter that the narrator is talking about," observes David Schneider, DFX and Technical Supervisor for Union VFX. "It is important for the work to focus on realism and subtle invisible effects that stay true to the historical moments being described during the interview. The core value of a documentary is to educate, so we generally have to keep augmentation minimal, not exaggerate, and retain a factually accurate depiction of events."

    Digital augmentation was not confined to one aspect, as there were 154 visual effects shots, and five assets had to be created. Schneider adds, "We handled everything from equipment removal during interview shots to creating CG creatures and augmenting environments." The film's many dramatizations gave Union VFX the chance to shine with standout assets, like an unlucky pigeon and a Soviet freighter. "One of the highlights was a nighttime airplane sequence where we delivered several fully CG shots that brought the scene to life."

    (Image caption: Union VFX handled everything from equipment removal during interview shots to creating CG creatures and augmenting environments, for a total of 154 visual effects shots in The Pigeon Tunnel. Images courtesy of Union VFX and Apple)

    Early on, Union VFX received detailed storyboard animatics. "It helped us get on the same page, and since documentaries don't typically use heavy visual effects, this was invaluable," Schneider states. Some scenes required complex augmentation. "For example, the sequence in which Kim Philby makes his escape to the Soviet Union required us to build the Dolmatova [a Soviet-era freighter], place it into provided plates, and enhance the surrounding dock with cargo and a digital gangway leading to the ship. All of this was integrated into the practical fog that was present on set. For the Monte Carlo pigeon shoot sequence, we needed a close-up of a pigeon being shot out of the sky. To achieve this, we had to create an entirely new feather simulation system that captured the realistic movement of feathers when the pigeon was hit. While we've worked with CG birds before, this was the first time we had been so close to the camera that individual feathers were clearly visible. We meticulously modeled the texture and styled the pigeon's feathers to ensure they moved naturally, both in flight and when they detached from the bird."

    Endurance

    Cutting back and forth from the ill-fated 1915 Antarctica expedition to the South African research vessel S.A. Agulhas II searching the Weddell Sea in 2022 for the sunken ship captained by renowned Irish explorer Ernest Shackleton is the National Geographic documentary Endurance, directed by Elizabeth Chai Vasarhelyi, Jimmy Chin and Natalie Hewit. "There were 28 men, and most of them wrote diaries or were able to tell their stories after the fact, so there is a lot of historical detail," states Producer Ruth Johnston. "We used AI voice conversion technology so that every word that you hear is from one of seven guys [from the expedition] who lead us through the story [by reading from their writings]."

    Virtual content was built for three separate re-creations of three different campsites, with various types of ice floes in the backgrounds. "These ice floes were important because it was something we would not have been able to easily recreate in real life," remarks Virtual Production Supervisor Eve Roth. "We color-corrected the virtual snow around the camp to match what the art department ended up putting down. Because we knew what kinds of harsh weather we were trying to recreate for the campsites, the virtual content was created in a way where we could dial up or down the wind and snow effects. We were also able to change the type of clouds in the sky, to dial that up and down."

    Stept Studios focused on the reenactments. "We had the urge to chase some fancy camera work, but ultimately, we wanted to shoot it the same way Frank Hurley [the Endurance expedition's official photographer] would have: on sticks with composed frames," explains Nick Martini, Founder and Creative Director of Stept Studios. "This visual approach allowed us to intercut our footage with the archival material seamlessly." Most of the visual effects work was completed before production. "Our efforts were centered around building the environments where the story takes place using Unreal Engine," Martini states. "Those worlds were then projected on LED volume stages to be used as interactive backgrounds on a stage in Los Angeles. This allows for an organic in-camera look when we shoot and provides more realistic lighting than a traditional greenscreen approach. In post, some additional clean-up and effects were added to sell the gag."

    (Image caption: Intercut with contemporary footage of the expedition to find Endurance, the backstory of the sunken ship was told through historical photographs taken by Frank Hurley as well as reenactments staged in a parking lot and on an LED volume. Photos: Jasper Poore/Weddell Sea Pictures; Frank Hurley/BFI; James Blake and Nick Birtwistle/Falklands Maritime Heritage Trust; Esther Horvath/National Geographic)

    Atmospherics were added to the archival still photographs. "We didn't want effects to overwhelm or take away from the original photography, rather to enhance the imagery or add impact in dramatic moments," states Josh Norton, Executive Creative Director and Founder of BigStar. "Blowing smoke and snow were added only when we felt those moments of drama were necessary or the original photo called for it."

    Orienting the audience is a collection of maps showing the progression of both expeditions. "The filmmakers had a desire to make sure the film's graphics didn't feel too expected or conservative," Norton remarks. "We were able to work with colorful type, energetic transitional language and texture while still making sure that we were being accurate to the historical research, especially on the maps." As for any lessons learned from the project, he replies, "Don't go to the Weddell Sea without a backup plan!"
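    The image-to-image Stable Diffusion workflow Tarantini describes for the Jackie Shane animation can be illustrated with the open-source diffusers library. The sketch below is not the team's actual pipeline; the model checkpoint, file names, prompt and strength value are placeholder assumptions. The key idea matches the quote: start from an existing frame and keep the stylization strength low, so the source image is restyled rather than replaced.

```python
# Minimal image-to-image stylization sketch with Stable Diffusion (diffusers).
# A low `strength` preserves the composition of the input frame and only
# restyles it; higher values let the model distort the source image more.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Start from an existing (e.g. rotoscoped) frame rather than generating from scratch.
frame = Image.open("rotoscoped_frame.png").convert("RGB").resize((512, 512))

styled = pipe(
    prompt="hand-painted animation frame, 1960s soul-club stage lighting",  # placeholder prompt
    image=frame,
    strength=0.35,       # keep close to the original image
    guidance_scale=7.5,
).images[0]

styled.save("styled_frame.png")
```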