• Romeo Is A Dead Man Is Grasshopper Manufacture Firing On All Cylinders
    www.gamespot.com
    Romeo Is A Dead Man is Grasshopper Manufacture's latest action game, created by Suda 51 and Ren Yamazaki. It's an all-new IP from the developer, and it feels like Grasshopper is firing on all cylinders. Kurt Indovina went hands-on with the game, fighting through zombies, monsters, and a giant naked headless woman (which he was a big fan of). Here's the preview.
  • Genshin Impact Leak Reveals Three New Nod-Krai Bosses
    gamerant.com
    A recent Genshin Impact leak hints at a trio of new bosses coming to the game with the release of Nod-Krai in Version 6.0. The massively popular RPG from HoYoverse is gearing up for one of its biggest updates of the year, with the Version 6.0 patch set to launch in just a couple of weeks. The update will officially introduce the newest playable region to the RPG, with Nod-Krai serving as the first explorable segment of Snezhnaya. Now, some major new enemies are on the way with Nod-Krai.
Apple is suing Oppo for illegally stealing confidential information! We cannot turn a blind eye to this scandal! How dare Oppo violate the rights of others so brazenly? Technical overreach has become commonplace in the tech world, but this time it has crossed an unacceptable line. Are we now living in an era where stealing information is routine? Apple, as a technology manufacturer, must take a firm stance, but we as consumers must also be more aware. We do not want to be victims of digital theft!

    #آبل #أوبو
    Apple sues Oppo for illegally stealing confidential information!
    arabhardware.net
    The post "Apple sues Oppo for illegally stealing confidential information!" appeared first on عرب هاردوير.
  • Vampire: The Masquerade Bloodlines 2 Has A Boomer Vampire In It
    www.gamespot.com
    The Chinese Room is taking a big bite out of vampire culture in Seattle with its moody RPG.
  • World of Warcraft's Midnight Controversy Explained
    gamerant.com
    On paper, now would be a great time to be a World of Warcraft fan, considering the recent Gamescom reveal of the Midnight expansion scheduled for next year. Even if player housing is not something World of Warcraft fans are excited about, there are many other noteworthy Midnight features to keep an eye on: the new Prey system that lets players defeat powerful mobs for cosmetic rewards, the addition of the Haranir as a playable race, a third Demon Hunter spec, extra talent points and Apex Talents, and more. However, the reveal has landed poorly with some fans for a plethora of reasons.
  • Gearing Up for the Gigawatt Data Center Age
    blogs.nvidia.com
    Across the globe, AI factories are rising: massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.

Welcome to the age of AI factories, where the rules are being rewritten and the wiring doesn't look anything like the old internet. These aren't typical hyperscale data centers. They're something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs: not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It's the whole game.

This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won't cut it. What's needed is a layered design with bleeding-edge technologies like co-packaged optics that once seemed like science fiction.

The complexity isn't a bug; it's the defining feature. AI infrastructure is diverging fast from everything that came before it, and if the way the pipes connect isn't rethought, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and gain extraordinary performance.

With that shift comes weight, literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi-hundred-pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.

The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables, tightly wound and precisely routed. It carries almost as much data per second as the entire internet: 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn't just fast. It's foundational. The AI super-highway now lives inside the rack.

The Data Center Is the Computer

Training the modern large language models (LLMs) behind AI isn't about burning cycles on a single machine. It's about orchestrating the work of tens or even hundreds of thousands of GPUs, the heavy lifters of AI computation.

These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices (typically massive matrices of numbers) need to be regularly merged and updated. That merging occurs through collective operations, such as all-reduce (which combines data from all nodes and redistributes the result) and all-to-all (where each node exchanges data with every other node).
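As a rough illustration of what such a collective looks like from the software side, here is a minimal sketch using PyTorch's torch.distributed. It is not from the original post: the launch command, backend choice and toy "gradient" tensor are all illustrative assumptions.

```python
# Minimal all-reduce sketch with PyTorch's torch.distributed.
# Launch with, e.g.: torchrun --nproc_per_node=4 allreduce_demo.py
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK/WORLD_SIZE; "gloo" runs on CPU, "nccl" targets GPUs.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()
    world = dist.get_world_size()

    # Each process holds its slice of the workload: here, a toy gradient.
    grad = torch.full((4,), float(rank))

    # all-reduce: combine data from all processes and redistribute the result.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= world  # average, as in data-parallel gradient synchronization

    print(f"rank {rank}: averaged gradient = {grad.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Every one of those all-reduce calls crosses the network, which is why the fabric's latency and bandwidth dominate training throughput at scale.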
These processes are sensitive to the speed and responsiveness of the network, what engineers call latency (delay) and bandwidth (data capacity); when either falls short, training stalls.

For inference, the process of running trained models to generate answers or predictions, the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.

Traditional Ethernet was designed for single-server workloads, not for the demands of distributed AI. Tolerating jitter and inconsistent delivery was once acceptable. Now, it's a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance, and that legacy still shapes their latest generations.

Distributed computing requires a scale-out infrastructure built for zero-jitter operation, one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.

With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers, demonstrating 35% growth in just two years.

For clusters spanning dozens of racks, NVIDIA Quantum-X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co-packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.

But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum-X: a new kind of Ethernet purpose-built for distributed AI.

Spectrum-X Ethernet: Bringing AI to the Enterprise

Spectrum-X reimagines Ethernet for AI. Launched in 2023, Spectrum-X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum-4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA's congestion control to maintain 95% data throughput at scale.

Spectrum-X is fully standards-based Ethernet. In addition to supporting Cumulus Linux, it supports the open-source SONiC network operating system, giving customers flexibility. A key ingredient is NVIDIA SuperNICs, based on NVIDIA BlueField-3 or ConnectX-8, which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.

Spectrum-X brings InfiniBand's best innovations, like telemetry-driven congestion control, adaptive load balancing and direct data placement, to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems built on Spectrum-X, including the world's most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.

A Portfolio for Scale-Up and Scale-Out

No single network can serve every layer of an AI factory.
NVIDIA's approach is to match the right fabric to the right tier, then tie everything together with software and silicon.

NVLink: Scale Up Inside the Rack

Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8-GPU server (72 GPUs is nine times eight). With NVLink, the entire rack becomes one large GPU.

Photonics: The Next Leap

To reach million-GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt-scale AI factories.

Delivering on the Promise of Open Standards

Spectrum-X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum-X is fully standards-based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association's InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA's software stack, including the NCCL and DOCA libraries, run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack: GPUs, NICs, switches, cables and software. Vendors that invest in end-to-end integration deliver better latency and throughput. SONiC, the open-source network operating system hardened in hyperscale data centers, eliminates licensing costs and vendor lock-in and allows intense customization, but operators still choose purpose-built hardware and software bundles to meet AI's performance needs. In practice, open standards alone don't deliver deterministic performance; they need innovation layered on top.

Toward Million-GPU AI Factories

AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA-powered AI infrastructure. The next horizon is gigawatt-class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.

The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across racks. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
  • DIGITAL DOMAIN SCALES BACK FOR GREATER EFFECT ON THUNDERBOLTS*
    www.vfxvoice.com
    By TREVOR HOGG. Images courtesy of Digital Domain and Marvel Studios.

Banding together in Thunderbolts* is a group of criminal misfits comprised of Yelena Belova, John Walker, Ava Starr, Bucky Barnes, Red Guardian and Taskmaster, who embark on a mission under the direction of filmmaker Jake Schreier, with Jake Morrison providing digital support. Contributing nearly 200 shots was Digital Domain, which was assigned the vault fight, the elevator shaft escape, a surreal moment with a Meth Chicken, and the creation of digital doubles for Yelena Belova, John Walker and Ava Starr that were shared with other participating vendors.

"What's great about this movie is that [director] Jake Schreier wanted to ground everything and have things be a lot smaller than we normally would propose. The first version of our explosion with Taskmaster's arrow tip was big. Jake was like, 'I want it a lot smaller.' Jake [Morrison] kept dialing it down in size because he felt it shouldn't be overwhelming. That was the philosophy for a lot of the effects in the tasks that we had in hand in visual effects."
Nikos Kalaitzidis, VFX Supervisor, Digital Domain

Motion blur was a key component of creating the Ghost Effect.

"One of the variables would be if we looked at the shots assigned to us and had Yelena as a mid-to-background character," explains Nikos Kalaitzidis, VFX Supervisor at Digital Domain. "We might have cut corners and built her differently, but we were the primary vendor that created this character, which had to be shared with other vendors that had to build her more hero-like. We had to make sure that the pores on her skin and face were top quality, and we could create and match the photographic reference provided to us along with the scans. Even though other vendors have their own proprietary software, which is normally a different renderer or rigging system, we provided everything we had once [the character] was completed, such as the model, displacement, textures, reference photography, renders and HDRIs used to create the final turntable."

Sparks were treated as 3D assets, which allowed them to be better integrated into shots as interactive light.

Serving as the antagonist is the Void, a cruel, dark entity that lives within a superhuman being suffering from amnesia known as Sentry, aka Robert "Bob" Reynolds. "In Bob's past life, he was a drug addict, and during a bout of depression he goes back to a dark memory," Kalaitzidis states. "As a side job, Bob wore a chicken suit trying to sell something on the side of the road while high on meth. This is one of those sequences that was thought up afterwards as part of the reshoots. The Thunderbolts go into Bob's brain, which has different rooms, and enter a closet that causes them to fall out into a different dimension where it's the Meth Chicken universe. A lot of clothes keep falling from the closet until they enter a different door that takes them somewhere else. We only had a few weeks to do it. We had to ensure that everything shot on set had a certain feel and look to it that worked with all of the surrounding sequences. What was interesting about this is they shot it, not with greenscreen, but an old-fashioned matte painting.
Our job wasn't to replace the matte painting with a digital one that had more depth, but to seamlessly have the ground meld into that matte painting and make things darker to fit the surrounding environments."

As part of the set extension work, the elevator shaft was made to appear as if it was a mile long.

"There is a point in time where they try to save themselves and go through the threshold at the top of the elevator shaft. Most of them fall and had to be replaced with digital doubles, which meant using the assets we created, having CFX for their cloth and hair, and making sure that the performances and physics were working well from one shot to another."
Nikos Kalaitzidis, VFX Supervisor, Digital Domain

Constructed as a 100-foot-long, 24-foot-high practical set, the vault still had to be digitally augmented to achieve the necessary size and scope. "There were certain parts of it that we needed to do, like set extensions for the ceiling or incinerator vents or hallways that go outside of the vault," Kalaitzidis remarks. "There was one hallway with the elevator shaft they built, and we provided three different hallways with variations for each one if the Thunderbolts needed to escape." Contributing to the complexity was the stunt work. "We pride ourselves on going from the stunt person to the main actor or actress. There was a lot of choreography that either had to be re-timed and re-performed so it feels like the hits are landing on the other actor and the weapons are hitting the shields." The arm of the Taskmaster had to be re-timed while fighting John Walker. Kalaitzidis notes, "They are fighting sword to shield, and the re-time in editorial didn't work out because there were a lot of pauses during the stunt performance. We took out those pauses and made sure there was a certain flow to the fight of the arm hitting the shield. We keyframed the arm in 2D to have a different choreography to ensure that both actors were fighting as intended."

The new helmet for Ghost makes use of a white mesh.

Multiple elements were shot when Walker throws Yelena across the vault. "Normally, with a shot like that we would do the hand-off of the stunt person to the main actor during the whip pan," Kalaitzidis explains. "But in this particular case, the director wanted us to zoom in on the main actress after the stunt actress hits the ground. The camera was more or less handheld, so we had to realign both cameras to make sure that they were working together. The ground and background had to be redone in CG. The most important part was, how do we see both the stunt actress and Florence Pugh? That was done, in part, by matchmoving both characters and lining them up as close as possible. We even had a digital double as a between, but what helped us was accidentally coming up with a new solution with our Charlatan software. When using Charlatan to swap the face, the artist noticed that he could also do the hair down to the shoulders. All of a sudden, he began to blend both plates together, and it became a glorified morphing tool. There is another shot where Walker does a kip-up. One of the stunt guys springs off his hands and lands on his feet. We had to do the same thing but using a digital double of his character and lining it up with the actor who lands at the [desired] place. We matchmoved his performance, did an animation, and used the Charlatan software to blend both bodies.
It turned out to be seamless."

The live-action blazes from Backdraft were a point of reference when creating the fire flood.

The elevator shaft had to be extended digitally so it appears to be a mile long. "We had to come up with a look of how it goes into the abyss, which feels like a signature for a lot of different sequences throughout the movie," Kalaitzidis states. "They shot the live-action set, which had a certain amount of texture. Jake felt that the textures inside of the set could be more reflective, so we had to enhance the live-action set to blend seamlessly with the set extension of the shaft that goes into darkness. They had safety harnesses to pull them, which had to be removed. There is a point in time where they try to save themselves and go through the threshold at the top of the elevator shaft. Most of them fall and had to be replaced with digital doubles, which meant using the assets we created, having CFX for their cloth and hair, and making sure that the performances and physics were working well from one shot to another."

"When you're phasing in and out, you might have four heads, and we called each one of those a leaf [a term coined by Visual Effects Supervisor Jake Morrison]. With those leaves we would make sure that they had different opacities, blurs and z-depths, so we had more complexity for each of them. As the leaves separate into different opacities, we also see them coming together. There is a certain choreography that we had in animation to achieve that."
Nikos Kalaitzidis, VFX Supervisor, Digital Domain

Digital Domain contributed nearly 200 visual effects shots, with lighting being a major component of the plate augmentation.

"Sparks are always fun to simulate. I always like 3D sparks because they're more integrated," Kalaitzidis remarks. "We also take the sparks and give them to our lighting department to use as interactive light. The same thing with 2D sparks, which have a great dynamic range within the plate and crank up the explosion to create interactive light as well." Explosions tended to be restrained. "What's great about this movie is that Jake Schreier wanted to ground everything and have things be a lot smaller than we normally would propose. The first version of our explosion with Taskmaster's arrow tip was big. Jake was like, 'I want it a lot smaller.' Jake kept dialing it down in size because he felt it shouldn't be overwhelming. That was the philosophy for a lot of the effects in the tasks that we had in hand in visual effects." A particular movie directed by Ron Howard was a point of reference. Kalaitzidis explains, "Jake Morrison told us, 'Take a look at the fires in Backdraft because they are all live-action.' There was a lot of slow motion. Looking at the texture and fire, and how the fire transmits into smoke, studying the smoke combined with the fire, we used a lot of that to adhere to our incinerator shot."

A slower, mechanical approach was adopted for the opening and closing of the helmet worn by the Taskmaster.

Costumes and effects get upgraded for every movie, with Ghost (Ava Starr) being a significant example this time. "Ava can phase out for up to a minute, so she has a bit more control over her phasing power," Kalaitzidis states. "This is interesting because it leads to how the phasing is used for choreography when she's fighting and reveals the ultimate sucker punch where she disappears one second, comes back and kicks someone in the face. How we got there was looking at a lot of the footage in Ant-Man. We did it similar but subtler.
The plates were matchmoved with the actress; we gave it to our animation team, which offset the performance left, right, forward, back in time and space. Then in lighting we rendered it out at different shutters: one long shutter to give it a dreamy look and another that had no shutter so it was sharp when we wanted it. That was handed to compositing, which had a template to put it all together because there were a lot of various renders going on at that point. It was a craft between animation, lighting and compositing to dial it in the way Jake Schreier wanted it."

A physicality needed to be conveyed for the Ghost Effect. "We would recreate the wall in 3D and make sure that as Ava is phasing through in 3D space, she doesn't look like a dissolve but actually appears to be coming out of that wall as her body is transforming through it," Kalaitzidis explains. "That was a technique used wherever we could. Another key thing that was tricky was, because we had some long shutters in the beginning in trying to develop this new look, it started to make her feel like she had super speed. We had to dial back the motion blurs that gave us these long streaks, which looked cool but implied a different sort of power." Multiple layers of effects had to be orchestrated like a dance. "When you're phasing in and out, you might have four heads, and we called each one of those a leaf [a term coined by Morrison]. With those leaves we would make sure that they had different opacities, blurs and z-depths, so we had more complexity for each one of them. As the leaves separate into different opacities, we also see them coming together. There is a certain choreography that we had in animation to achieve that."

Stunt rehearsals were critical in choreographing the fight between Taskmaster and Ghost inside the vault.

Explosions were dialed down to make them more believable.

"[Ghost (Ava Starr)] can phase out for up to a minute, so she has a bit more control over her phasing power. This is interesting because it leads to how the phasing is used for choreography when she's fighting and reveals the ultimate sucker punch where she disappears one second, comes back and kicks someone in the face. How we got there was looking at a lot of the footage in Ant-Man. We did it similar but subtler."
Nikos Kalaitzidis, VFX Supervisor, Digital Domain

Constructing the Cryo Case to store Bob was a highlight. "It was one of those effects that no one will pay attention to in the movie in regard to how much thought went into it," Kalaitzidis observes. "We went through a concept stage with the previs department to come up with almost a dozen different looks for the inside of the Cryo Case." Digital Domain was responsible for how the energy is discharged from Yelena's bracelet for the Widow Bite effect. "That was fun because it was established in Black Widow and was a red effect. We went from red to blue, and the Widow Bite was like the explosion when we first did it; it was big arcs of electricity, and Jake Schreier had us dial it down and be more grounded, so we made it smaller and smaller. Not only is it the electricity shooting out as a projectile and hitting someone's body, but what does the bracelet look like? We did some look development as if there's an energy source inside of the bracelet."

Contributing to the integration of the vault fight was the burning paper found throughout the environment.

Allowing the quick opening and closing of the helmet for Ghost was the conceit that it utilizes nanomite technology.

Helmets proved to be challenging.
"In the MCU, there are these helmets that have nanomite technology, which justifies why they can open and close so fast, in a matter of four to six frames," Kalaitzidis states. "Ghost had a cool new helmet that had a certain white mesh. We had to break the helmet up into different parts to make it feel mechanical while receding and closing. That happened quickly because there are a lot of shots of her where she touches a button on a collar and it opens up, and you want to see her performance quickly. It worked well with the cut. For the Taskmaster, we only see it once, and Jake wanted the effect to be more mechanical. It wasn't nanomite technology, and he didn't want it to be magical. Unlike the other helmets, it had to be nice and slow. We had to make sure that it worked with the actor's face and skin so it doesn't go through her body and also works with the hoodie. As the helmet goes back, you see the hoodie wrinkle, and it does the same thing when closing."

Contributing to the surrealness are the Thunderbolts entering the dark recesses of Bob's mind and encountering his time spent as a chicken mascot high on meth.

One of the more complex shots to execute was the fire flood effect in the vault. "If the room was exploding, we had a lot of paper on the ground and ran a simulation on that so it would get affected," Kalaitzidis remarks. "Then they would run a lighting pass to make sure whatever explosion was happening would light the characters, the crates in the room and the ceiling to ensure everything was well integrated." A collaborative mentality prevailed during the production of Thunderbolts*. "We were graced with having such a great team and working closely with Jake Morrison. Having him in the same room with Jake Schreier during reviews, so we could understand what he was going through and wanted, and the sort of effects he was looking for, was helpful."

Watch an informative video breakdown of Digital Domain's VFX work on the vault fight and elevator shaft escape for Thunderbolts*: https://www.youtube.com/watch?v=d0DtdBriMHg
  • NBA 2K26: Hands-on report and PS5 bundle details, launching September 5
    blog.playstation.com
    The official start of the season may be two months away, but basketball is back with NBA 2K26 hitting PS5 and PS4 September 5. The latest entry brings a new gameplay system powered by machine learning, studying today's superstars, and fun pick-up-and-play options. 2K invited me to go hands-on with the game before it launches September 5 on PS5, and I'm here to share what I learned on the court.

Also launching starting September 5 in select markets is the PlayStation 5 Console NBA 2K26 Bundle. Read on for full details.

Better ball

2K26 puts considerable effort into improving both sides of the floor, with notable offensive and defensive enhancements. New machine-learning technology helps capture the fundamentals of the game. While playing, I noticed players would run and get set by firmly planting their feet, instead of a gliding effect. While driving into the paint, they would also stop and accurately respond to a defender in their lane. These details add a realistic weight to the sport.

Enhanced Rhythm Shooting

You can still flick down-up on the right analog stick or simply press square to start your shooting motion, then release at the correct timing for the individual player's shot release. However, now the tempo of the play, like in real life, affects your shot. When a good defender bogged me down, I could quickly release my shot and intentionally release it early for a decisive bucket. With a high basketball IQ, any shot has the potential to be a good shot.

Defensive battles

Players can swing a game in their favor if the shots aren't falling, thanks to new improvements centered around real-world tactics. Around the players' feet, you will see new Rebound Timing Feedback as a green meter that flashes to indicate a well-timed rebound. Learning Chet Holmgren's rebound timing made me nearly unstoppable under the rim and made me focus on an aspect of the game I had neglected before.

Collisions and interior defense both benefit from revamped system-driven tech that allows for more real-time interactions instead of scripted mocap animations. If you want to stop a fast break or crowd the lane, players will stop, adjust, and even collide realistically. The game rewards paying attention to the action when the ball isn't in your hands.

Arena atmosphere

The devs also upgraded the game's spectacle during downtime and timeouts with new crowd variety, interactions, and on-court performances. Cheerleader routines and mascot antics are fun, but my favorite by far was the dance cam. These moments captured the feel of attending a game live and the sense of community that attending a sporting event can create.

MyTEAM updates

MyTEAM has received a significant remodel, with Triple Threat Park turning Sunset Beach into a nighttime venue. Players are greeted with neon lights, fireworks, and other details that can only be appreciated after dark. Pulling cards and collecting players has also become an even bigger spectacle with dramatic reveals and added flair.

The biggest change to MyTEAM is that WNBA players join the action for the first time in series history. Newcomers like Angel Reese and Caitlin Clark take to the hardwood along with legends like Lisa Leslie. Attributes and Badges are identical for all players, no matter what league they hail from. Also, there is a WNBA Domination tier where your squad will be exclusively WNBA players as you challenge teams to earn Domination stars and crests.

Another first is 2v2 games in Triple Threat Park.
Two half courts have been added in the middle of the street, where you can run your favorite two-person team-ups. The park also features four 3v3 courts, including a new option with a beach backdrop, and three 3v3 courts for 6-player co-op matches. These games capture the essence of streetball, featuring players calling their fouls, checking the ball at midcourt, and engaging in some lively trash talk: a great way to mix and match your favorite ball players and have some quick, high-energy games.

All-Star Team Up is now part of MyTEAM, where 10 players duke it out in 5v5 co-op matches. Take your favorite NBA or WNBA players for some very high-level play where being a good role player is the key to success. Earn individual rewards with the new Season Ladder and earn rewards as a team by winning matches. Find the right chemistry with your teammates, because for every five games you win with the same team lineup, everyone will receive rewards, even if the wins aren't consecutive.

Discover all the new enhancements coming to the court when NBA 2K26 launches September 5 on PS5.

Vertical Stand sold separately

PS5 Console NBA 2K26 Bundle

We're pleased to announce the PlayStation 5 Console NBA 2K26 Bundle is launching in select markets starting September 5. Release dates and availability may vary by region; please check direct.playstation.com where available or your local retailer for availability and release dates.

Players can feel the on-court immersion made possible by the DualSense wireless controller's haptic feedback and adaptive triggers. Experience NBA 2K26's authenticity with lifelike animations, heightened player fidelity and authentic atmosphere in 4K resolution*, and enjoy shortened load times and return to the action faster with the PS5 console's high-speed SSD.

The bundle includes a PlayStation 5 console, a DualSense wireless controller, and a digital voucher** for NBA 2K26 Standard Edition.

With a robust focus on features and the aspects of the game that don't rely on the players, it's great to play and watch. No matter your height, you should hit the court when NBA 2K26 comes to PS5 and PS4 on September 5.

*4K and HDR require a 4K and HDR compatible TV or display.
**Account for PlayStation and internet connection required to redeem voucher.
  • Pokémon Mystery Dungeon is a criminally underrated roguelike
    www.polygon.com
    The Mystery Dungeon games share a core formula: you wake up as a human turned into a pocket monster, you choose a partner, and then descend through procedurally generated dungeons to rescue your fellow Pokémon and uncover the reason behind your transformation. It's straightforward and kid-friendly, but the series doesn't shy away from complex themes or challenging boss battles. All 11 entries in this spinoff franchise continue the loop of dungeon spelunking and rescue missions, while layering on a dramatic plot that appeals to both fans of roguelikes and the classic Pokémon titles.
  • Beyond The Hype: What AI Can Really Do For Product Design
    smashingmagazine.com
    These days, it's easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What's much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer: not for experimentation, but for real, meaningful outcomes.

I've gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I've built a simple, repeatable workflow that significantly boosts my productivity.

In this article, I'll share what's already working and break down some of the most common objections I've encountered, many of which I've faced personally.

Stage 1: Idea Generation Without The Clichés

Pushback: "Whenever I ask AI to suggest ideas, I just get a list of clichés. It can't produce the kind of creative thinking expected from a product designer."

That's a fair point. AI doesn't know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to feed it all the documentation you have. But that's a common mistake, as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI's answers become vague and unfocused.

Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the "lost in the middle" problem.

To get meaningful results, AI doesn't just need more information; it needs the right information, delivered in the right way. That's where the RAG approach comes in.

How RAG Works

Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary: a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of card catalog, called a vector database.

When you ask a question, the assistant doesn't reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.

How Is This Different From Just Dumping A Doc Into The Chat?

Let's break it down:

- Typical chat interaction: It's like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is in front of them, but it's easy to miss something, especially if it's in the middle. This is exactly what the "lost in the middle" issue refers to.
- RAG approach: You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It's faster and more accurate, but it introduces a few new risks:
- Ambiguous question: You ask, "How can we make the project safer?" and the assistant brings you documents about cybersecurity, not finance.
- Mixed chunks: A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning so the assistant can't tell what the core topic is.
- Semantic gap: You ask, "How can we speed up the app?" but the document says, "Optimize API response time." For a human, that's obviously related. For a machine, not always.

These aren't reasons to avoid RAG or AI altogether. Most of them can be avoided with better preparation of your knowledge base and more precise prompts.
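Before moving on, here is a toy, self-contained Python sketch of that retrieve-then-prompt loop. The hashed bag-of-words "embedding" is a stand-in for a real embedding model, and the document names and contents are hypothetical; it only illustrates the mechanics described above.

```python
# Toy RAG loop: embed short single-topic docs, retrieve the best matches
# for a query, and build a prompt from only those chunks.
import math
import re
from collections import Counter

DIM = 512  # size of the toy embedding space

def embed(text: str) -> list[float]:
    """Hashed bag-of-words vector; a real system would call an embedding model."""
    vec = [0.0] * DIM
    for word, count in Counter(re.findall(r"[a-z']+", text.lower())).items():
        vec[hash(word) % DIM] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# The "card catalog": one entry per short, single-topic document (hypothetical).
knowledge_base = {
    "product_overview.txt": "What the product does and the core user scenarios...",
    "target_audience.txt": "Main user segments and their key needs and goals...",
    "research_notes.txt": "Key insights from interviews, surveys and analytics...",
}
index = {name: embed(text) for name, text in knowledge_base.items()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the names of the k documents most similar to the query."""
    q = embed(query)
    return sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)[:k]

query = "Which user segments would use group gift contributions?"
context = "\n\n".join(knowledge_base[name] for name in retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

A real setup would also split longer documents into chunks and store the vectors in a proper vector database, but the flow (embed, retrieve, then prompt) is the same.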
So, where do you start?

Start With Three Short, Focused Documents

These three short documents will give your AI assistant just enough context to be genuinely helpful:

- Product Overview & Scenarios: A brief summary of what your product does and the core user scenarios.
- Target Audience: Your main user segments and their key needs or goals.
- Research & Experiments: Key insights from interviews, surveys, user testing, or product analytics.

Each document should focus on a single topic and ideally stay within 300-500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.

Language Matters

In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:

- English prompt + English documents: Consistently accurate and relevant results.
- Non-English prompt + English documents: Quality dropped sharply. The AI struggled to match the query with the right content.
- Non-English prompt + non-English documents: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.

Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups; for regular chat interactions, you're free to use other languages. This challenge is also highlighted in a 2024 study on multilingual retrieval.

From Outsider To Teammate: Giving AI The Context It Needs

Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas the way a mid-level or senior designer would.

Here's an example of a prompt that works well for me:

"Your task is to perform a comparative analysis of two features: 'Group gift contributions' (described in group_goals.txt) and 'Personal savings goals' (described in personal_goals.txt). The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.

Please include:
- Possible overlaps in user goals, actions, or scenarios;
- Potential confusion if both features are launched at the same time;
- Any architectural or business-level conflicts (e.g., roles, notifications, access rights, financial logic);
- Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;
- Onboarding screens or explanatory elements that might help users understand both features.

If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on."

AI Needs Context, Not Just Prompts

If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information.

Building a usable knowledge base isn't difficult. And you don't need a full-blown RAG system to get started.
Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AI's responses are. That's your first step in turning AI from a novelty into a practical tool in your product design workflow.

Stage 2: Prototyping And Visual Experiments

Pushback: "AI only generates obvious solutions and can't even build a proper user flow. It's faster to do it manually."

That's a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.

For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they could flip to reveal a prize. I couldn't recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.

At the prototyping stage, AI can be a strong creative partner in two areas:

- UI element ideation: It can generate dozens of interactive patterns, including ones you might not think of yourself.
- Micro-animation generation: It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.

AI can also be applied to multi-screen prototypes, but it's not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks: individual screens, elements, or animations where it can kick off the thinking process and save hours of trial and error.

A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.

Here's another valuable way to use AI in design: as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn't to generate new UI, but to check how well AI could operate inside existing layouts: placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.

The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input, a clear gain in design accuracy, not just speed.

That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.

Stage 3: Finalizing The Interface And Visual Style

Pushback: "AI can't match our visual style. It's easier to just do it by hand."

This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don't feel like they belong in your product. They tend to be either overly decorative or overly simplified.

And this is a real limitation. In my experience, today's models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:

- Direct integration with a component library. I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. Other designers have run into similar issues, noting that library support in Figma Make is still limited and often unstable.
- Uploading styles as JSON. Instead of a full component library, I tried uploading only the exported styles (colors, fonts) in JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.
- Two-step approach: structure first, style second. What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result, though still far from pixel-perfect. A sketch of this two-step flow follows below.
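To make the two-step idea concrete, here is a small Python sketch of that "structure first, style second" prompting flow. The call_model helper is a hypothetical stand-in for whatever model client you use, and the styles JSON is a simplified example rather than a real design-system export.

```python
# Two-step prompting: ask for layout first, then apply design tokens.
import json

STYLES = {
    "colors": {"primary": "#4F46E5", "surface": "#FFFFFF", "text": "#111827"},
    "fonts": {"heading": "Inter Semibold 24", "body": "Inter Regular 16"},
}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in; wire this up to your actual model client."""
    return f"[model output for a {len(prompt)}-char prompt]"

# Step 1: layout and composition only, explicitly without styling.
structure = call_model(
    "Generate the layout for a savings-goal creation screen: header, goal "
    "name input, amount input, deadline picker, primary CTA. Describe "
    "structure and hierarchy only. Do not apply any visual styling."
)

# Step 2: apply styles from the JSON to the structure from step 1.
styled = call_model(
    "Apply these design tokens to the layout below, using only these values.\n"
    f"Tokens: {json.dumps(STYLES)}\n\nLayout:\n{structure}"
)
print(styled)
```

Separating the two requests keeps the style constraints from polluting the structural reasoning, which is likely why this ordering produced the most usable results in practice.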
So yes, AI still can't help you finalize your UI. It doesn't replace hand-crafted design work. But it's very useful in other ways:

- Quickly creating a visual concept for discussion.
- Generating "what if" alternatives to existing mockups.
- Exploring how your interface might look in a different style or direction.
- Acting as a second pair of eyes by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.

AI won't save you five hours of high-fidelity design time, since you'll probably spend that long fixing its output. But as a visual sparring partner, it's already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.

Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit

Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX: mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.

As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions, as models tend to generate an "average" experience, not a real one. The strength of AI isn't in inventing data but in processing it at scale.

Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.

Simply counting the percentages for each of the five predefined reasons wasn't enough. I wanted to know:

- Are there specific times of day when users churn more?
- Do the reasons differ by region?
- Is there a correlation between user exits and system load?

The real challenge was figuring out which cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done for me by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn't have been able to reach that level of insight on my own at all.

AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what's truly valuable: asking the right questions.
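For readers who want to see what those cuts look like in code, here is a minimal pandas sketch of the same three questions. The file and column names (exit_survey.csv, timestamp, region, reason) are hypothetical; the analysis in the article itself was done by Gemini inside Google Sheets.

```python
# Sketch of the exit-survey cuts: churn by hour and reasons by region.
import pandas as pd

df = pd.read_csv("exit_survey.csv", parse_dates=["timestamp"])

# Overall share of each predefined churn reason.
print(df["reason"].value_counts(normalize=True).round(3))

# Are there specific times of day when users churn more?
by_hour = df.groupby(df["timestamp"].dt.hour).size()
print("peak churn hour:", by_hour.idxmax())

# Do the reasons differ by region? Rows: region, columns: reason, values: share.
by_region = pd.crosstab(df["region"], df["reason"], normalize="index")
print(by_region.round(2))
```

Correlating exits with system load would additionally need a load log joined on timestamp, which is exactly the kind of angle that becomes easy to ask for once the data sits in one table.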
A few practical notes: working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.

AI Is Not An Autopilot But A Co-Pilot

AI in design is only as good as the questions you ask it. It doesn't do the work for you. It doesn't replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it's still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer. But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don't wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.

Let's Summarize

- If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That's the "lost in the middle" problem.
- The RAG approach helps by pulling only the most relevant pieces from your documents, so responses are faster, more accurate, and grounded in real context.
- Clear, focused prompts work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.
- A well-structured knowledge base makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.
- Use English for both your prompts and your documents. Even multilingual models are most reliable when working in English, especially for retrieval.
- Most importantly: treat AI as a creative partner. It won't replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.

Further Reading

- "AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality," Cindy Brummer. This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on before you dive into RAG or idea generation.
- "8 essential tips for using Figma Make," Alexia Danton. While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you're experimenting with visual tools and structured prompting.
- "What Is Retrieval-Augmented Generation, aka RAG," Rick Merritt. If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.
CGShares https://cgshares.com